scons: Reading SConscript files ...
scons version: 2.1.0.alpha.20101125
python version: 2 6 4 'final' 0
Checking whether the C++ compiler works(cached) yes
Checking for C header file unistd.h... (cached) yes
Checking whether clock_gettime is declared... (cached) yes
Checking for C library rt... (cached) yes
Checking for C++ header file execinfo.h... (cached) yes
Checking whether backtrace is declared... (cached) yes
Checking whether backtrace_symbols is declared... (cached) yes
Checking for C library pcap... (cached) no
Checking for C library wpcap... (cached) no
scons: done reading SConscript files.
scons: Building targets ...
generate_buildinfo(["build/buildinfo.cpp"], ['\n#include <string>\n#include <boost/version.hpp>\n\n#include "mongo/util/version.h"\n\nnamespace mongo {\n const char * gitVersion() { return "%(git_version)s"; }\n std::string sysInfo() { return "%(sys_info)s BOOST_LIB_VERSION=" BOOST_LIB_VERSION ; }\n} // namespace mongo\n'])
/usr/bin/python /home/yellow/buildslave/Linux_32bit_debug/mongo/buildscripts/smoke.py mongosTest
cwd [/home/yellow/buildslave/Linux_32bit_debug/mongo]
num procs:172
removing: /data/db/sconsTests//mongod.lock
Wed Jun 13 22:28:45
Wed Jun 13 22:28:45 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Wed Jun 13 22:28:45
Wed Jun 13 22:28:45 [initandlisten] MongoDB starting : pid=9194 port=27999 dbpath=/data/db/sconsTests/ 32-bit host=tp2.10gen.cc
Wed Jun 13 22:28:45 [initandlisten] _DEBUG build (which is slower)
Wed Jun 13 22:28:45 [initandlisten]
Wed Jun 13 22:28:45 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
Wed Jun 13 22:28:45 [initandlisten] ** Not recommended for production.
Wed Jun 13 22:28:45 [initandlisten]
Wed Jun 13 22:28:45 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Wed Jun 13 22:28:45 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
Wed Jun 13 22:28:45 [initandlisten] ** with --journal, the limit is lower
Wed Jun 13 22:28:45 [initandlisten]
Wed Jun 13 22:28:45 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
Wed Jun 13 22:28:45 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
Wed Jun 13 22:28:45 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
Wed Jun 13 22:28:45 [initandlisten] options: { dbpath: "/data/db/sconsTests/", port: 27999 }
Wed Jun 13 22:28:45 [initandlisten] opening db: local
Wed Jun 13 22:28:45 [initandlisten] waiting for connections on port 27999
Wed Jun 13 22:28:45 [websvr] admin web console waiting for connections on port 28999
Wed Jun 13 22:28:46 [initandlisten] connection accepted from 127.0.0.1:53793 #1 (1 connection now open)
running /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/
*******************************************
Test : mongos ...
Command : /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --test
Date : Wed Jun 13 22:28:46 2012
Wed Jun 13 22:28:46 [conn1] end connection 127.0.0.1:53793 (0 connections now open)
Wed Jun 13 22:28:46 versionCmpTest passed
Wed Jun 13 22:28:46 versionArrayTest passed
Wed Jun 13 22:28:46 _inBalancingWindow: now: 2012-Jun-13 13:48:00 startTime: 2012-Jun-13 09:00:00 stopTime: 2012-Jun-13 11:00:00
Wed Jun 13 22:28:46 _inBalancingWindow: now: 2012-Jun-13 13:48:00 startTime: 2012-Jun-13 17:00:00 stopTime: 2012-Jun-13 21:30:00
Wed Jun 13 22:28:46 _inBalancingWindow: now: 2012-Jun-13 13:48:00 startTime: 2012-Jun-13 11:00:00 stopTime: 2012-Jun-13 17:00:00
Wed Jun 13 22:28:46 _inBalancingWindow: now: 2012-Jun-13 13:48:00 startTime: 2012-Jun-13 21:30:00 stopTime: 2012-Jun-13 17:00:00
Wed Jun 13 22:28:46 warning: must specify both start and end of balancing window: { start: 1 }
Wed Jun 13 22:28:46 warning: must specify both start and end of balancing window: { stop: 1 }
Wed Jun 13 22:28:46 warning: cannot parse active window (use hh:mm 24hs format): { start: "21:30", stop: "28:35" }
Wed Jun 13 22:28:46 BalancingWidowObjTest passed
Wed Jun 13 22:28:46 shardKeyTest passed
Wed Jun 13 22:28:46 shardObjTest passed
Wed Jun 13 22:28:46 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
Wed Jun 13 22:28:46 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
Wed Jun 13 22:28:46 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
Wed Jun 13 22:28:46 Matcher::matches() { abcdef: "z23456789" }
Wed Jun 13 22:28:46 Matcher::matches() { abcd: 3.1, abcdef: "123456789" }
Wed Jun 13 22:28:46 Matcher::matches() { abcdef: "z23456789" }
tests passed
14.967203ms
Wed Jun 13 22:28:46 [initandlisten] connection accepted from 127.0.0.1:53794 #2 (1 connection now open)
Wed Jun 13 22:28:46 got signal 15 (Terminated), will terminate after current cmd ends
Wed Jun 13 22:28:46 [interruptThread] now exiting
Wed Jun 13 22:28:46 dbexit:
Wed Jun 13 22:28:46 [interruptThread] shutdown: going to close listening sockets...
Wed Jun 13 22:28:46 [interruptThread] closing listening socket: 5
Wed Jun 13 22:28:46 [interruptThread] closing listening socket: 8
Wed Jun 13 22:28:46 [interruptThread] closing listening socket: 9
Wed Jun 13 22:28:46 [interruptThread] removing socket file: /tmp/mongodb-27999.sock
Wed Jun 13 22:28:46 [interruptThread] shutdown: going to flush diaglog...
Wed Jun 13 22:28:46 [interruptThread] shutdown: going to close sockets...
Wed Jun 13 22:28:46 [interruptThread] shutdown: waiting for fs preallocator...
Wed Jun 13 22:28:46 [interruptThread] shutdown: closing all files...
Wed Jun 13 22:28:46 [interruptThread] closeAllFiles() finished
Wed Jun 13 22:28:46 [interruptThread] shutdown: removing fs lock...
Wed Jun 13 22:28:46 dbexit: really exiting now
1 tests succeeded
/usr/bin/python /home/yellow/buildslave/Linux_32bit_debug/mongo/buildscripts/smoke.py sharding
cwd [/home/yellow/buildslave/Linux_32bit_debug/mongo]
num procs:172
removing: /data/db/sconsTests//mongod.lock
Wed Jun 13 22:29:06
Wed Jun 13 22:29:06 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Wed Jun 13 22:29:06
Wed Jun 13 22:29:06 [initandlisten] MongoDB starting : pid=9214 port=27999 dbpath=/data/db/sconsTests/ 32-bit host=tp2.10gen.cc
Wed Jun 13 22:29:06 [initandlisten] _DEBUG build (which is slower)
Wed Jun 13 22:29:06 [initandlisten]
Wed Jun 13 22:29:06 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
Wed Jun 13 22:29:06 [initandlisten] ** Not recommended for production.
Wed Jun 13 22:29:06 [initandlisten]
Wed Jun 13 22:29:06 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Wed Jun 13 22:29:06 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
Wed Jun 13 22:29:06 [initandlisten] ** with --journal, the limit is lower
Wed Jun 13 22:29:06 [initandlisten]
Wed Jun 13 22:29:06 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
Wed Jun 13 22:29:06 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
Wed Jun 13 22:29:06 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
Wed Jun 13 22:29:06 [initandlisten] options: { dbpath: "/data/db/sconsTests/", port: 27999 }
Wed Jun 13 22:29:06 [initandlisten] opening db: local
Wed Jun 13 22:29:06 [initandlisten] waiting for connections on port 27999
Wed Jun 13 22:29:06 [websvr] admin web console waiting for connections on port 28999
Wed Jun 13 22:29:07 [initandlisten] connection accepted from 127.0.0.1:53795 #1 (1 connection now open)
running /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/
*******************************************
Test : addshard1.js ...
Command : /home/yellow/buildslave/Linux_32bit_debug/mongo/mongo --port 27999 --nodb /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard1.js --eval TestData = new Object();TestData.testPath = "/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard1.js";TestData.testFile = "addshard1.js";TestData.testName = "addshard1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Wed Jun 13 22:29:07 2012
Wed Jun 13 22:29:07 [conn1] end connection 127.0.0.1:53795 (0 connections now open)
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/add_shard10'
Wed Jun 13 22:29:07 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30000 --dbpath /data/db/add_shard10
m30000| Wed Jun 13 22:29:07
m30000| Wed Jun 13 22:29:07 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Wed Jun 13 22:29:07
m30000| Wed Jun 13 22:29:07 [initandlisten] MongoDB starting : pid=9225 port=30000 dbpath=/data/db/add_shard10 32-bit host=tp2.10gen.cc
m30000| Wed Jun 13 22:29:07 [initandlisten] _DEBUG build (which is slower)
m30000| Wed Jun 13 22:29:07 [initandlisten]
m30000| Wed Jun 13 22:29:07 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Wed Jun 13 22:29:07 [initandlisten] ** Not recommended for production.
m30000| Wed Jun 13 22:29:07 [initandlisten]
m30000| Wed Jun 13 22:29:07 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Wed Jun 13 22:29:07 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Wed Jun 13 22:29:07 [initandlisten] ** with --journal, the limit is lower
m30000| Wed Jun 13 22:29:07 [initandlisten]
m30000| Wed Jun 13 22:29:07 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Wed Jun 13 22:29:07 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Wed Jun 13 22:29:07 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30000| Wed Jun 13 22:29:07 [initandlisten] options: { dbpath: "/data/db/add_shard10", port: 30000 }
m30000| Wed Jun 13 22:29:07 [initandlisten] opening db: local
m30000| Wed Jun 13 22:29:07 [initandlisten] waiting for connections on port 30000
m30000| Wed Jun 13 22:29:07 [websvr] admin web console waiting for connections on port 31000
m30000| Wed Jun 13 22:29:07 [initandlisten] connection accepted from 127.0.0.1:56757 #1 (1 connection now open)
"localhost:30000"
m30000| Wed Jun 13 22:29:07 [initandlisten] connection accepted from 127.0.0.1:56758 #2 (2 connections now open)
m30000| Wed Jun 13 22:29:07 [conn2] opening db: config
ShardingTest add_shard1 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000
]
}
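The block above is the cluster summary printed by the ShardingTest helper in the mongo shell: a single mongod on localhost:30000 serving as both config server and first shard, with a mongos started on port 30999 just below. A rough sketch of the kind of call that produces this setup (addshard1.js itself is not reproduced in this log, so the exact constructor arguments are an assumption):

    // hypothetical sketch: start a one-shard test cluster named add_shard1
    var s = new ShardingTest({ name: "add_shard1", shards: 1 });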
m30000| Wed Jun 13 22:29:07 [FileAllocator] allocating new datafile /data/db/add_shard10/config.ns, filling with zeroes...
m30000| Wed Jun 13 22:29:07 [FileAllocator] creating directory /data/db/add_shard10/_tmp
Wed Jun 13 22:29:07 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --port 30999 --configdb localhost:30000
m30999| Wed Jun 13 22:29:07 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Jun 13 22:29:07 [mongosMain] MongoS version 2.1.2-pre- starting: pid=9240 port=30999 32-bit host=tp2.10gen.cc (--help for usage)
m30999| Wed Jun 13 22:29:07 [mongosMain] _DEBUG build
m30999| Wed Jun 13 22:29:07 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Wed Jun 13 22:29:07 [mongosMain] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30999| Wed Jun 13 22:29:07 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Wed Jun 13 22:29:07 [initandlisten] connection accepted from 127.0.0.1:56760 #3 (3 connections now open)
m30000| Wed Jun 13 22:29:07 [FileAllocator] done allocating datafile /data/db/add_shard10/config.ns, size: 16MB, took 0.047 secs
m30000| Wed Jun 13 22:29:07 [FileAllocator] allocating new datafile /data/db/add_shard10/config.0, filling with zeroes...
m30000| Wed Jun 13 22:29:07 [FileAllocator] done allocating datafile /data/db/add_shard10/config.0, size: 16MB, took 0.038 secs
m30000| Wed Jun 13 22:29:07 [conn2] datafileheader::init initializing /data/db/add_shard10/config.0 n:0
m30000| Wed Jun 13 22:29:07 [FileAllocator] allocating new datafile /data/db/add_shard10/config.1, filling with zeroes...
m30000| Wed Jun 13 22:29:07 [conn2] build index config.settings { _id: 1 }
m30000| Wed Jun 13 22:29:07 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:07 [initandlisten] connection accepted from 127.0.0.1:56761 #4 (4 connections now open)
m30000| Wed Jun 13 22:29:07 [conn4] build index config.version { _id: 1 }
m30000| Wed Jun 13 22:29:07 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:29:07 [Balancer] about to contact config servers and shards
m30000| Wed Jun 13 22:29:07 [conn3] build index config.chunks { _id: 1 }
m30999| Wed Jun 13 22:29:07 [mongosMain] waiting for connections on port 30999
m30000| Wed Jun 13 22:29:07 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:07 [conn3] info: creating collection config.chunks on add index
m30000| Wed Jun 13 22:29:07 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Wed Jun 13 22:29:07 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:07 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Wed Jun 13 22:29:07 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:07 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Wed Jun 13 22:29:07 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:07 [conn3] build index config.shards { _id: 1 }
m30000| Wed Jun 13 22:29:07 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:07 [conn3] info: creating collection config.shards on add index
m30000| Wed Jun 13 22:29:07 [conn3] build index config.shards { host: 1 }
m30000| Wed Jun 13 22:29:07 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:29:07 [Balancer] config servers and shards contacted successfully
m30999| Wed Jun 13 22:29:07 [Balancer] balancer id: tp2.10gen.cc:30999 started at Jun 13 22:29:07
m30999| Wed Jun 13 22:29:07 [websvr] admin web console waiting for connections on port 31999
m30999| Wed Jun 13 22:29:07 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:29:07 [initandlisten] connection accepted from 127.0.0.1:56762 #5 (5 connections now open)
m30000| Wed Jun 13 22:29:07 [conn4] build index config.mongos { _id: 1 }
m30000| Wed Jun 13 22:29:07 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:29:07 [LockPinger] creating distributed lock ping thread for localhost:30000 and process tp2.10gen.cc:30999:1339644547:1804289383 (sleeping for 30000ms)
m30000| Wed Jun 13 22:29:07 [conn3] build index config.lockpings { _id: 1 }
m30000| Wed Jun 13 22:29:07 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:07 [conn5] build index config.locks { _id: 1 }
m30000| Wed Jun 13 22:29:07 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:07 [conn3] build index config.lockpings { ping: 1 }
m30000| Wed Jun 13 22:29:07 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Wed Jun 13 22:29:07 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644547:1804289383' acquired, ts : 4fd95a838108a29420109a77
m30999| Wed Jun 13 22:29:07 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644547:1804289383' unlocked.
m30000| Wed Jun 13 22:29:07 [initandlisten] connection accepted from 127.0.0.1:56763 #6 (6 connections now open)
m30000| Wed Jun 13 22:29:07 [FileAllocator] done allocating datafile /data/db/add_shard10/config.1, size: 32MB, took 0.069 secs
m30999| Wed Jun 13 22:29:08 [mongosMain] connection accepted from 127.0.0.1:49921 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Wed Jun 13 22:29:08 [conn] couldn't find database [admin] in config db
m30000| Wed Jun 13 22:29:08 [conn6] build index config.databases { _id: 1 }
m30000| Wed Jun 13 22:29:08 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:29:08 [conn] put [admin] on: config:localhost:30000
m30999| Wed Jun 13 22:29:08 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
m30000| Wed Jun 13 22:29:08 [initandlisten] connection accepted from 127.0.0.1:56765 #7 (7 connections now open)
m30999| Wed Jun 13 22:29:08 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd95a838108a29420109a76
Wed Jun 13 22:29:08 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 29000 --dbpath /data/db/29000 --noprealloc --smallfiles --oplogSize 40 --nohttpinterface
m29000| note: noprealloc may hurt performance in many applications
m29000| Wed Jun 13 22:29:08
m29000| Wed Jun 13 22:29:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Wed Jun 13 22:29:08
m29000| Wed Jun 13 22:29:08 [initandlisten] MongoDB starting : pid=9262 port=29000 dbpath=/data/db/29000 32-bit host=tp2.10gen.cc
m29000| Wed Jun 13 22:29:08 [initandlisten] _DEBUG build (which is slower)
m29000| Wed Jun 13 22:29:08 [initandlisten]
m29000| Wed Jun 13 22:29:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Wed Jun 13 22:29:08 [initandlisten] ** Not recommended for production.
m29000| Wed Jun 13 22:29:08 [initandlisten]
m29000| Wed Jun 13 22:29:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Wed Jun 13 22:29:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Wed Jun 13 22:29:08 [initandlisten] ** with --journal, the limit is lower
m29000| Wed Jun 13 22:29:08 [initandlisten]
m29000| Wed Jun 13 22:29:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Wed Jun 13 22:29:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Wed Jun 13 22:29:08 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m29000| Wed Jun 13 22:29:08 [initandlisten] options: { dbpath: "/data/db/29000", nohttpinterface: true, noprealloc: true, oplogSize: 40, port: 29000, smallfiles: true }
m29000| Wed Jun 13 22:29:08 [initandlisten] opening db: local
m29000| Wed Jun 13 22:29:08 [initandlisten] waiting for connections on port 29000
m29000| Wed Jun 13 22:29:08 [initandlisten] connection accepted from 127.0.0.1:35737 #1 (1 connection now open)
m29000| Wed Jun 13 22:29:08 [conn1] opening db: testDB
m29000| Wed Jun 13 22:29:08 [FileAllocator] allocating new datafile /data/db/29000/testDB.ns, filling with zeroes...
m29000| Wed Jun 13 22:29:08 [FileAllocator] creating directory /data/db/29000/_tmp
m29000| Wed Jun 13 22:29:08 [FileAllocator] done allocating datafile /data/db/29000/testDB.ns, size: 16MB, took 0.039 secs
m29000| Wed Jun 13 22:29:08 [FileAllocator] allocating new datafile /data/db/29000/testDB.0, filling with zeroes...
m29000| Wed Jun 13 22:29:08 [FileAllocator] done allocating datafile /data/db/29000/testDB.0, size: 16MB, took 0.036 secs
m29000| Wed Jun 13 22:29:08 [conn1] datafileheader::init initializing /data/db/29000/testDB.0 n:0
m29000| Wed Jun 13 22:29:08 [conn1] build index testDB.foo { _id: 1 }
m29000| Wed Jun 13 22:29:08 [conn1] build index done. scanned 0 total records. 0.014 secs
m29000| Wed Jun 13 22:29:08 [initandlisten] connection accepted from 127.0.0.1:35738 #2 (2 connections now open)
m30999| Wed Jun 13 22:29:08 [conn] going to add shard: { _id: "myShard", host: "localhost:29000" }
m30999| Wed Jun 13 22:29:08 [conn] couldn't find database [testDB] in config db
m30999| Wed Jun 13 22:29:08 [conn] put [testDB] on: myShard:localhost:29000
Wed Jun 13 22:29:08 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 29001 --dbpath /data/db/29001 --noprealloc --smallfiles --oplogSize 40 --nohttpinterface
m29001| note: noprealloc may hurt performance in many applications
m29001| Wed Jun 13 22:29:08
m29001| Wed Jun 13 22:29:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29001| Wed Jun 13 22:29:08
m29001| Wed Jun 13 22:29:08 [initandlisten] MongoDB starting : pid=9276 port=29001 dbpath=/data/db/29001 32-bit host=tp2.10gen.cc
m29001| Wed Jun 13 22:29:08 [initandlisten] _DEBUG build (which is slower)
m29001| Wed Jun 13 22:29:08 [initandlisten]
m29001| Wed Jun 13 22:29:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29001| Wed Jun 13 22:29:08 [initandlisten] ** Not recommended for production.
m29001| Wed Jun 13 22:29:08 [initandlisten]
m29001| Wed Jun 13 22:29:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29001| Wed Jun 13 22:29:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29001| Wed Jun 13 22:29:08 [initandlisten] ** with --journal, the limit is lower
m29001| Wed Jun 13 22:29:08 [initandlisten]
m29001| Wed Jun 13 22:29:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29001| Wed Jun 13 22:29:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29001| Wed Jun 13 22:29:08 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m29001| Wed Jun 13 22:29:08 [initandlisten] options: { dbpath: "/data/db/29001", nohttpinterface: true, noprealloc: true, oplogSize: 40, port: 29001, smallfiles: true }
m29001| Wed Jun 13 22:29:08 [initandlisten] opening db: local
m29001| Wed Jun 13 22:29:08 [initandlisten] waiting for connections on port 29001
m29001| Wed Jun 13 22:29:08 [initandlisten] connection accepted from 127.0.0.1:59031 #1 (1 connection now open)
m29001| Wed Jun 13 22:29:08 [conn1] opening db: otherDB
m29001| Wed Jun 13 22:29:08 [FileAllocator] allocating new datafile /data/db/29001/otherDB.ns, filling with zeroes...
m29001| Wed Jun 13 22:29:08 [FileAllocator] creating directory /data/db/29001/_tmp
m29001| Wed Jun 13 22:29:08 [FileAllocator] done allocating datafile /data/db/29001/otherDB.ns, size: 16MB, took 0.037 secs
m29001| Wed Jun 13 22:29:08 [FileAllocator] allocating new datafile /data/db/29001/otherDB.0, filling with zeroes...
m29001| Wed Jun 13 22:29:08 [FileAllocator] done allocating datafile /data/db/29001/otherDB.0, size: 16MB, took 0.035 secs
m29001| Wed Jun 13 22:29:08 [conn1] datafileheader::init initializing /data/db/29001/otherDB.0 n:0
m29001| Wed Jun 13 22:29:08 [conn1] build index otherDB.foo { _id: 1 }
m29001| Wed Jun 13 22:29:08 [conn1] build index done. scanned 0 total records. 0 secs
m29001| Wed Jun 13 22:29:08 [conn1] opening db: testDB
m29001| Wed Jun 13 22:29:08 [FileAllocator] allocating new datafile /data/db/29001/testDB.ns, filling with zeroes...
m29001| Wed Jun 13 22:29:08 [FileAllocator] done allocating datafile /data/db/29001/testDB.ns, size: 16MB, took 0.043 secs
m29001| Wed Jun 13 22:29:08 [FileAllocator] allocating new datafile /data/db/29001/testDB.0, filling with zeroes...
m29001| Wed Jun 13 22:29:08 [FileAllocator] done allocating datafile /data/db/29001/testDB.0, size: 16MB, took 0.037 secs
m29001| Wed Jun 13 22:29:08 [conn1] datafileheader::init initializing /data/db/29001/testDB.0 n:0
m29001| Wed Jun 13 22:29:08 [conn1] build index testDB.foo { _id: 1 }
m29001| Wed Jun 13 22:29:09 [conn1] build index done. scanned 0 total records. 0.36 secs
m29001| Wed Jun 13 22:29:09 [conn1] insert testDB.foo keyUpdates:0 locks(micros) w:529492 448ms
m30999| Wed Jun 13 22:29:09 [conn] creating WriteBackListener for: localhost:29000 serverID: 4fd95a838108a29420109a76
m29000| Wed Jun 13 22:29:09 [initandlisten] connection accepted from 127.0.0.1:35741 #3 (3 connections now open)
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "testDB", "partitioned" : false, "primary" : "myShard" }
m29001| Wed Jun 13 22:29:09 [initandlisten] connection accepted from 127.0.0.1:59033 #2 (2 connections now open)
m30999| Wed Jun 13 22:29:09 [conn] addshard request { addshard: "localhost:29001", name: "rejectedShard" } failed: can't add shard localhost:29001 because a local database 'testDB' exists in another myShard:localhost:29000
m30999| Wed Jun 13 22:29:09 [conn] couldn't find database [otherDB] in config db
m29000| Wed Jun 13 22:29:09 [initandlisten] connection accepted from 127.0.0.1:35743 #4 (4 connections now open)
m30999| Wed Jun 13 22:29:09 [conn] put [otherDB] on: myShard:localhost:29000
m29000| Wed Jun 13 22:29:09 [conn3] _DEBUG ReadContext db wasn't open, will try to open otherDB.foo
m29000| Wed Jun 13 22:29:09 [conn3] opening db: otherDB
m30999| Wed Jun 13 22:29:09 [conn] Moving testDB primary from: myShard:localhost:29000 to: shard0000:localhost:30000
m30999| Wed Jun 13 22:29:09 [conn] created new distributed lock for testDB-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Wed Jun 13 22:29:09 [conn] distributed lock 'testDB-movePrimary/tp2.10gen.cc:30999:1339644547:1804289383' acquired, ts : 4fd95a858108a29420109a78
m30000| Wed Jun 13 22:29:09 [conn5] opening db: testDB
m29000| Wed Jun 13 22:29:09 [initandlisten] connection accepted from 127.0.0.1:35744 #5 (5 connections now open)
m30000| Wed Jun 13 22:29:09 [FileAllocator] allocating new datafile /data/db/add_shard10/testDB.ns, filling with zeroes...
m30000| Wed Jun 13 22:29:09 [FileAllocator] done allocating datafile /data/db/add_shard10/testDB.ns, size: 16MB, took 0.046 secs
m30000| Wed Jun 13 22:29:09 [FileAllocator] allocating new datafile /data/db/add_shard10/testDB.0, filling with zeroes...
m30000| Wed Jun 13 22:29:09 [FileAllocator] done allocating datafile /data/db/add_shard10/testDB.0, size: 16MB, took 0.038 secs
m30000| Wed Jun 13 22:29:09 [conn5] datafileheader::init initializing /data/db/add_shard10/testDB.0 n:0
m30000| Wed Jun 13 22:29:09 [FileAllocator] allocating new datafile /data/db/add_shard10/testDB.1, filling with zeroes...
m30000| Wed Jun 13 22:29:09 [conn5] build index testDB.foo { _id: 1 }
m30000| Wed Jun 13 22:29:09 [conn5] fastBuildIndex dupsToDrop:0
m30000| Wed Jun 13 22:29:09 [conn5] build index done. scanned 3 total records. 0 secs
m29000| Wed Jun 13 22:29:09 [conn5] end connection 127.0.0.1:35744 (4 connections now open)
m30999| Wed Jun 13 22:29:09 [conn] movePrimary dropping database on localhost:29000, no sharded collections in testDB
m29000| Wed Jun 13 22:29:09 [conn4] dropDatabase testDB
m30999| Wed Jun 13 22:29:09 [conn] distributed lock 'testDB-movePrimary/tp2.10gen.cc:30999:1339644547:1804289383' unlocked.
m30000| Wed Jun 13 22:29:09 [conn7] build index testDB.foo { a: 1.0 }
m30000| Wed Jun 13 22:29:09 [conn7] build index done. scanned 3 total records. 0 secs
m30999| Wed Jun 13 22:29:09 [conn] enabling sharding on: testDB
m30999| Wed Jun 13 22:29:09 [conn] CMD: shardcollection: { shardcollection: "testDB.foo", key: { a: 1.0 } }
m30999| Wed Jun 13 22:29:09 [conn] enable sharding on: testDB.foo with shard key: { a: 1.0 }
m30999| Wed Jun 13 22:29:09 [conn] going to create 1 chunk(s) for: testDB.foo using new epoch 4fd95a858108a29420109a79
m30000| Wed Jun 13 22:29:09 [initandlisten] connection accepted from 127.0.0.1:56775 #8 (8 connections now open)
m30999| Wed Jun 13 22:29:09 [conn] ChunkManager: time to load chunks for testDB.foo: 0ms sequenceNumber: 2 version: 1|0||4fd95a858108a29420109a79 based on: (empty)
m30999| Wed Jun 13 22:29:09 [conn] DEV WARNING appendDate() called with a tiny (but nonzero) date
m30000| Wed Jun 13 22:29:09 [conn6] build index config.collections { _id: 1 }
m30000| Wed Jun 13 22:29:09 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:09 [conn7] no current chunk manager found for this shard, will initialize
m30999| Wed Jun 13 22:29:09 [conn] splitting: testDB.foo shard: ns:testDB.foo at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey }
m30000| Wed Jun 13 22:29:09 [initandlisten] connection accepted from 127.0.0.1:56776 #9 (9 connections now open)
m30000| Wed Jun 13 22:29:09 [conn5] received splitChunk request: { splitChunk: "testDB.foo", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, from: "shard0000", splitKeys: [ { a: 1.0 } ], shardId: "testDB.foo-a_MinKey", configdb: "localhost:30000" }
m30000| Wed Jun 13 22:29:09 [conn5] created new distributed lock for testDB.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:29:09 [LockPinger] creating distributed lock ping thread for localhost:30000 and process tp2.10gen.cc:30000:1339644549:978854455 (sleeping for 30000ms)
m30000| Wed Jun 13 22:29:09 [initandlisten] connection accepted from 127.0.0.1:56777 #10 (10 connections now open)
m30000| Wed Jun 13 22:29:09 [conn5] distributed lock 'testDB.foo/tp2.10gen.cc:30000:1339644549:978854455' acquired, ts : 4fd95a85c35d70bd7a8bcc1a
m30000| Wed Jun 13 22:29:09 [conn5] splitChunk accepted at version 1|0||4fd95a858108a29420109a79
m30000| Wed Jun 13 22:29:09 [conn9] info PageFaultRetryableSection will not yield, already locked upon reaching
m30000| Wed Jun 13 22:29:09 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:29:09-0", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:56762", time: new Date(1339644549258), what: "split", ns: "testDB.foo", details: { before: { min: { a: MinKey }, max: { a: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey }, max: { a: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd95a858108a29420109a79') }, right: { min: { a: 1.0 }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd95a858108a29420109a79') } } }
m30000| Wed Jun 13 22:29:09 [conn5] distributed lock 'testDB.foo/tp2.10gen.cc:30000:1339644549:978854455' unlocked.
m30999| Wed Jun 13 22:29:09 [conn] ChunkManager: time to load chunks for testDB.foo: 0ms sequenceNumber: 3 version: 1|2||4fd95a858108a29420109a79 based on: 1|0||4fd95a858108a29420109a79
m30999| range.universal(): 1
m29000| Wed Jun 13 22:29:09 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Wed Jun 13 22:29:09 [interruptThread] now exiting
m29000| Wed Jun 13 22:29:09 dbexit:
m29000| Wed Jun 13 22:29:09 [interruptThread] shutdown: going to close listening sockets...
m29000| Wed Jun 13 22:29:09 [interruptThread] closing listening socket: 18
m29000| Wed Jun 13 22:29:09 [interruptThread] closing listening socket: 19
m29000| Wed Jun 13 22:29:09 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Wed Jun 13 22:29:09 [interruptThread] shutdown: going to flush diaglog...
m29000| Wed Jun 13 22:29:09 [interruptThread] shutdown: going to close sockets...
m29000| Wed Jun 13 22:29:09 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Wed Jun 13 22:29:09 [interruptThread] shutdown: closing all files...
m29000| Wed Jun 13 22:29:09 [interruptThread] closeAllFiles() finished
m29000| Wed Jun 13 22:29:09 [interruptThread] shutdown: removing fs lock...
m29000| Wed Jun 13 22:29:09 dbexit: really exiting now
m30999| Wed Jun 13 22:29:09 [WriteBackListener-localhost:29000] DBClientCursor::init call() failed
m30999| Wed Jun 13 22:29:09 [WriteBackListener-localhost:29000] dev: lastError==0 won't report:DBClientBase::findN: transport error: localhost:29000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd95a838108a29420109a76') }
m30999| Wed Jun 13 22:29:09 [WriteBackListener-localhost:29000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:29000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd95a838108a29420109a76') }
m30000| Wed Jun 13 22:29:09 [FileAllocator] done allocating datafile /data/db/add_shard10/testDB.1, size: 32MB, took 0.429 secs
Wed Jun 13 22:29:10 shell: stopped mongo program on port 29000
m29001| Wed Jun 13 22:29:10 got signal 15 (Terminated), will terminate after current cmd ends
m29001| Wed Jun 13 22:29:10 [interruptThread] now exiting
m29001| Wed Jun 13 22:29:10 dbexit:
m29001| Wed Jun 13 22:29:10 [interruptThread] shutdown: going to close listening sockets...
m29001| Wed Jun 13 22:29:10 [interruptThread] closing listening socket: 21
m29001| Wed Jun 13 22:29:10 [interruptThread] closing listening socket: 23
m29001| Wed Jun 13 22:29:10 [interruptThread] removing socket file: /tmp/mongodb-29001.sock
m29001| Wed Jun 13 22:29:10 [interruptThread] shutdown: going to flush diaglog...
m29001| Wed Jun 13 22:29:10 [interruptThread] shutdown: going to close sockets...
m29001| Wed Jun 13 22:29:10 [interruptThread] shutdown: waiting for fs preallocator...
m29001| Wed Jun 13 22:29:10 [interruptThread] shutdown: closing all files...
m29001| Wed Jun 13 22:29:10 [interruptThread] closeAllFiles() finished
m29001| Wed Jun 13 22:29:10 [interruptThread] shutdown: removing fs lock...
m29001| Wed Jun 13 22:29:10 dbexit: really exiting now
m30999| Wed Jun 13 22:29:10 [WriteBackListener-localhost:29000] DBClientCursor::init call() failed
m30999| Wed Jun 13 22:29:10 [WriteBackListener-localhost:29000] dev: lastError==0 won't report:DBClientBase::findN: transport error: localhost:29000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd95a838108a29420109a76') }
m30999| Wed Jun 13 22:29:10 [WriteBackListener-localhost:29000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:29000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd95a838108a29420109a76') }
Wed Jun 13 22:29:11 shell: stopped mongo program on port 29001
m30999| Wed Jun 13 22:29:11 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Wed Jun 13 22:29:11 [conn3] end connection 127.0.0.1:56760 (9 connections now open)
m30000| Wed Jun 13 22:29:11 [conn6] end connection 127.0.0.1:56763 (9 connections now open)
m30000| Wed Jun 13 22:29:11 [conn8] end connection 127.0.0.1:56775 (9 connections now open)
m30000| Wed Jun 13 22:29:11 [conn7] end connection 127.0.0.1:56765 (8 connections now open)
m30000| Wed Jun 13 22:29:11 [conn5] end connection 127.0.0.1:56762 (5 connections now open)
Wed Jun 13 22:29:12 shell: stopped mongo program on port 30999
m30000| Wed Jun 13 22:29:12 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Wed Jun 13 22:29:12 [interruptThread] now exiting
m30000| Wed Jun 13 22:29:12 dbexit:
m30000| Wed Jun 13 22:29:12 [interruptThread] shutdown: going to close listening sockets...
m30000| Wed Jun 13 22:29:12 [interruptThread] closing listening socket: 11
m30000| Wed Jun 13 22:29:12 [interruptThread] closing listening socket: 12
m30000| Wed Jun 13 22:29:12 [interruptThread] closing listening socket: 13
m30000| Wed Jun 13 22:29:12 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Wed Jun 13 22:29:12 [interruptThread] shutdown: going to flush diaglog...
m30000| Wed Jun 13 22:29:12 [interruptThread] shutdown: going to close sockets...
m30000| Wed Jun 13 22:29:12 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Wed Jun 13 22:29:12 [interruptThread] shutdown: closing all files...
m30000| Wed Jun 13 22:29:12 [conn10] end connection 127.0.0.1:56777 (4 connections now open)
m30000| Wed Jun 13 22:29:12 [conn9] end connection 127.0.0.1:56776 (3 connections now open)
m30000| Wed Jun 13 22:29:12 [interruptThread] closeAllFiles() finished
m30000| Wed Jun 13 22:29:12 [interruptThread] shutdown: removing fs lock...
m30000| Wed Jun 13 22:29:12 dbexit: really exiting now
Wed Jun 13 22:29:13 shell: stopped mongo program on port 30000
*** ShardingTest add_shard1 completed successfully in 5.691 seconds ***
5731.584072ms
Wed Jun 13 22:29:13 [initandlisten] connection accepted from 127.0.0.1:53818 #2 (1 connection now open)
*******************************************
Test : addshard2.js ...
Command : /home/yellow/buildslave/Linux_32bit_debug/mongo/mongo --port 27999 --nodb /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard2.js --eval TestData = new Object();TestData.testPath = "/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard2.js";TestData.testFile = "addshard2.js";TestData.testName = "addshard2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Wed Jun 13 22:29:13 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/add_shard20'
Wed Jun 13 22:29:13 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30000 --dbpath /data/db/add_shard20
m30000| Wed Jun 13 22:29:13
m30000| Wed Jun 13 22:29:13 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Wed Jun 13 22:29:13
m30000| Wed Jun 13 22:29:13 [initandlisten] MongoDB starting : pid=9309 port=30000 dbpath=/data/db/add_shard20 32-bit host=tp2.10gen.cc
m30000| Wed Jun 13 22:29:13 [initandlisten] _DEBUG build (which is slower)
m30000| Wed Jun 13 22:29:13 [initandlisten]
m30000| Wed Jun 13 22:29:13 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Wed Jun 13 22:29:13 [initandlisten] ** Not recommended for production.
m30000| Wed Jun 13 22:29:13 [initandlisten]
m30000| Wed Jun 13 22:29:13 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Wed Jun 13 22:29:13 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Wed Jun 13 22:29:13 [initandlisten] ** with --journal, the limit is lower
m30000| Wed Jun 13 22:29:13 [initandlisten]
m30000| Wed Jun 13 22:29:13 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Wed Jun 13 22:29:13 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Wed Jun 13 22:29:13 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30000| Wed Jun 13 22:29:13 [initandlisten] options: { dbpath: "/data/db/add_shard20", port: 30000 }
m30000| Wed Jun 13 22:29:13 [initandlisten] opening db: local
m30000| Wed Jun 13 22:29:13 [initandlisten] waiting for connections on port 30000
m30000| Wed Jun 13 22:29:13 [websvr] admin web console waiting for connections on port 31000
m30000| Wed Jun 13 22:29:13 [initandlisten] connection accepted from 127.0.0.1:56780 #1 (1 connection now open)
"tp2.10gen.cc:30000"
m30000| Wed Jun 13 22:29:13 [initandlisten] connection accepted from 184.173.149.242:50661 #2 (2 connections now open)
m30000| Wed Jun 13 22:29:13 [conn2] opening db: config
ShardingTest add_shard2 :
{
"config" : "tp2.10gen.cc:30000",
"shards" : [
connection to tp2.10gen.cc:30000
]
}
m30000| Wed Jun 13 22:29:13 [FileAllocator] allocating new datafile /data/db/add_shard20/config.ns, filling with zeroes...
m30000| Wed Jun 13 22:29:13 [FileAllocator] creating directory /data/db/add_shard20/_tmp
Wed Jun 13 22:29:13 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --port 30999 --configdb tp2.10gen.cc:30000
m30999| Wed Jun 13 22:29:13 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Jun 13 22:29:13 [mongosMain] MongoS version 2.1.2-pre- starting: pid=9324 port=30999 32-bit host=tp2.10gen.cc (--help for usage)
m30999| Wed Jun 13 22:29:13 [mongosMain] _DEBUG build
m30999| Wed Jun 13 22:29:13 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Wed Jun 13 22:29:13 [mongosMain] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30999| Wed Jun 13 22:29:13 [mongosMain] options: { configdb: "tp2.10gen.cc:30000", port: 30999 }
m30000| Wed Jun 13 22:29:13 [initandlisten] connection accepted from 184.173.149.242:50663 #3 (3 connections now open)
m30000| Wed Jun 13 22:29:13 [FileAllocator] done allocating datafile /data/db/add_shard20/config.ns, size: 16MB, took 0.035 secs
m30000| Wed Jun 13 22:29:13 [FileAllocator] allocating new datafile /data/db/add_shard20/config.0, filling with zeroes...
m30000| Wed Jun 13 22:29:13 [FileAllocator] done allocating datafile /data/db/add_shard20/config.0, size: 16MB, took 0.036 secs
m30000| Wed Jun 13 22:29:13 [conn2] datafileheader::init initializing /data/db/add_shard20/config.0 n:0
m30000| Wed Jun 13 22:29:13 [FileAllocator] allocating new datafile /data/db/add_shard20/config.1, filling with zeroes...
m30000| Wed Jun 13 22:29:13 [conn2] build index config.settings { _id: 1 }
m30000| Wed Jun 13 22:29:13 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:13 [initandlisten] connection accepted from 184.173.149.242:50664 #4 (4 connections now open)
m30000| Wed Jun 13 22:29:13 [initandlisten] connection accepted from 184.173.149.242:50665 #5 (5 connections now open)
m30000| Wed Jun 13 22:29:13 [conn5] build index config.version { _id: 1 }
m30000| Wed Jun 13 22:29:13 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:29:13 [mongosMain] waiting for connections on port 30999
m30000| Wed Jun 13 22:29:13 [conn4] build index config.chunks { _id: 1 }
m30000| Wed Jun 13 22:29:13 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:13 [conn4] info: creating collection config.chunks on add index
m30000| Wed Jun 13 22:29:13 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Wed Jun 13 22:29:13 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:13 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Wed Jun 13 22:29:13 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:13 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Wed Jun 13 22:29:13 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:13 [conn4] build index config.shards { _id: 1 }
m30000| Wed Jun 13 22:29:13 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:13 [conn4] info: creating collection config.shards on add index
m30000| Wed Jun 13 22:29:13 [conn4] build index config.shards { host: 1 }
m30000| Wed Jun 13 22:29:13 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:29:13 [websvr] admin web console waiting for connections on port 31999
m30999| Wed Jun 13 22:29:13 [Balancer] about to contact config servers and shards
m30999| Wed Jun 13 22:29:13 [Balancer] config servers and shards contacted successfully
m30999| Wed Jun 13 22:29:13 [Balancer] balancer id: tp2.10gen.cc:30999 started at Jun 13 22:29:13
m30999| Wed Jun 13 22:29:13 [Balancer] created new distributed lock for balancer on tp2.10gen.cc:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:29:13 [conn5] build index config.mongos { _id: 1 }
m30000| Wed Jun 13 22:29:13 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:13 [initandlisten] connection accepted from 184.173.149.242:50666 #6 (6 connections now open)
m30999| Wed Jun 13 22:29:13 [LockPinger] creating distributed lock ping thread for tp2.10gen.cc:30000 and process tp2.10gen.cc:30999:1339644553:1804289383 (sleeping for 30000ms)
m30000| Wed Jun 13 22:29:13 [conn4] build index config.lockpings { _id: 1 }
m30000| Wed Jun 13 22:29:13 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:13 [conn6] build index config.locks { _id: 1 }
m30000| Wed Jun 13 22:29:13 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:29:13 [conn4] build index config.lockpings { ping: 1 }
m30000| Wed Jun 13 22:29:13 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Wed Jun 13 22:29:13 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644553:1804289383' acquired, ts : 4fd95a8949e9fe2fa8cdeabf
m30999| Wed Jun 13 22:29:13 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644553:1804289383' unlocked.
m30000| Wed Jun 13 22:29:13 [FileAllocator] done allocating datafile /data/db/add_shard20/config.1, size: 32MB, took 0.072 secs
m30999| Wed Jun 13 22:29:13 [mongosMain] connection accepted from 127.0.0.1:49944 #1 (1 connection now open)
ShardingTest undefined going to add shard : tp2.10gen.cc:30000
m30999| Wed Jun 13 22:29:13 [conn] couldn't find database [admin] in config db
m30000| Wed Jun 13 22:29:13 [conn4] build index config.databases { _id: 1 }
m30000| Wed Jun 13 22:29:13 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:29:13 [conn] put [admin] on: config:tp2.10gen.cc:30000
m30999| Wed Jun 13 22:29:13 [conn] going to add shard: { _id: "shard0000", host: "tp2.10gen.cc:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
Wed Jun 13 22:29:13 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30001 --dbpath /data/db/add_shard21 --noprealloc --smallfiles --oplogSize 40 --nohttpinterface
m30001| note: noprealloc may hurt performance in many applications
m30001| Wed Jun 13 22:29:13
m30001| Wed Jun 13 22:29:13 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Wed Jun 13 22:29:13
m30001| Wed Jun 13 22:29:13 [initandlisten] MongoDB starting : pid=9343 port=30001 dbpath=/data/db/add_shard21 32-bit host=tp2.10gen.cc
m30001| Wed Jun 13 22:29:13 [initandlisten] _DEBUG build (which is slower)
m30001| Wed Jun 13 22:29:13 [initandlisten]
m30001| Wed Jun 13 22:29:13 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Wed Jun 13 22:29:13 [initandlisten] ** Not recommended for production.
m30001| Wed Jun 13 22:29:13 [initandlisten]
m30001| Wed Jun 13 22:29:13 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Wed Jun 13 22:29:13 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Wed Jun 13 22:29:13 [initandlisten] ** with --journal, the limit is lower
m30001| Wed Jun 13 22:29:13 [initandlisten]
m30001| Wed Jun 13 22:29:13 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Wed Jun 13 22:29:13 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Wed Jun 13 22:29:13 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30001| Wed Jun 13 22:29:13 [initandlisten] options: { dbpath: "/data/db/add_shard21", nohttpinterface: true, noprealloc: true, oplogSize: 40, port: 30001, smallfiles: true }
m30001| Wed Jun 13 22:29:13 [initandlisten] opening db: local
m30001| Wed Jun 13 22:29:13 [initandlisten] waiting for connections on port 30001
m30001| Wed Jun 13 22:29:13 [initandlisten] connection accepted from 127.0.0.1:51051 #1 (1 connection now open)
Wed Jun 13 22:29:13 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30002 --dbpath /data/db/add_shard22 --noprealloc --smallfiles --oplogSize 40 --nohttpinterface
m30002| note: noprealloc may hurt performance in many applications
m30002| Wed Jun 13 22:29:13
m30002| Wed Jun 13 22:29:13 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Wed Jun 13 22:29:13
m30002| Wed Jun 13 22:29:13 [initandlisten] MongoDB starting : pid=9355 port=30002 dbpath=/data/db/add_shard22 32-bit host=tp2.10gen.cc
m30002| Wed Jun 13 22:29:13 [initandlisten] _DEBUG build (which is slower)
m30002| Wed Jun 13 22:29:13 [initandlisten]
m30002| Wed Jun 13 22:29:13 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Wed Jun 13 22:29:13 [initandlisten] ** Not recommended for production.
m30002| Wed Jun 13 22:29:13 [initandlisten]
m30002| Wed Jun 13 22:29:13 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Wed Jun 13 22:29:13 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Wed Jun 13 22:29:13 [initandlisten] ** with --journal, the limit is lower
m30002| Wed Jun 13 22:29:13 [initandlisten]
m30002| Wed Jun 13 22:29:13 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Wed Jun 13 22:29:13 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Wed Jun 13 22:29:13 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30002| Wed Jun 13 22:29:13 [initandlisten] options: { dbpath: "/data/db/add_shard22", nohttpinterface: true, noprealloc: true, oplogSize: 40, port: 30002, smallfiles: true }
m30002| Wed Jun 13 22:29:14 [initandlisten] opening db: local
m30002| Wed Jun 13 22:29:14 [initandlisten] waiting for connections on port 30002
m30002| Wed Jun 13 22:29:14 [initandlisten] connection accepted from 127.0.0.1:59200 #1 (1 connection now open)
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201, 31202 ] 31200 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31200,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 0,
"set" : "add_shard2_rs1"
}
}
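The option block above is what ReplSetTest prints for each node it is about to launch. A minimal sketch of the shell helper driving this, assuming the object-style constructor (set name, node count, and oplog size are taken from the log; the precise options addshard2.js passes are not shown in the output):

    var rs1 = new ReplSetTest({ name: "add_shard2_rs1", nodes: 3, oplogSize: 40 });
    rs1.startSet();   // launches the three mongod processes (ports 31200-31202 in this run)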
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs1-0'
Wed Jun 13 22:29:14 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet add_shard2_rs1 --dbpath /data/db/add_shard2_rs1-0
m31200| note: noprealloc may hurt performance in many applications
m31200| Wed Jun 13 22:29:14
m31200| Wed Jun 13 22:29:14 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31200| Wed Jun 13 22:29:14
m31200| Wed Jun 13 22:29:14 [initandlisten] MongoDB starting : pid=9367 port=31200 dbpath=/data/db/add_shard2_rs1-0 32-bit host=tp2.10gen.cc
m31200| Wed Jun 13 22:29:14 [initandlisten] _DEBUG build (which is slower)
m31200| Wed Jun 13 22:29:14 [initandlisten]
m31200| Wed Jun 13 22:29:14 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31200| Wed Jun 13 22:29:14 [initandlisten] ** Not recommended for production.
m31200| Wed Jun 13 22:29:14 [initandlisten]
m31200| Wed Jun 13 22:29:14 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31200| Wed Jun 13 22:29:14 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31200| Wed Jun 13 22:29:14 [initandlisten] ** with --journal, the limit is lower
m31200| Wed Jun 13 22:29:14 [initandlisten]
m31200| Wed Jun 13 22:29:14 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31200| Wed Jun 13 22:29:14 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31200| Wed Jun 13 22:29:14 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31200| Wed Jun 13 22:29:14 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs1-0", noprealloc: true, oplogSize: 40, port: 31200, replSet: "add_shard2_rs1", rest: true, smallfiles: true }
m31200| Wed Jun 13 22:29:14 [initandlisten] waiting for connections on port 31200
m31200| Wed Jun 13 22:29:14 [websvr] admin web console waiting for connections on port 32200
m31200| Wed Jun 13 22:29:14 [initandlisten] connection accepted from 184.173.149.242:40843 #1 (1 connection now open)
m31200| Wed Jun 13 22:29:14 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31200| Wed Jun 13 22:29:14 [conn1] opening db: local
m31200| Wed Jun 13 22:29:14 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31200| Wed Jun 13 22:29:14 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31200| Wed Jun 13 22:29:14 [initandlisten] connection accepted from 127.0.0.1:55835 #2 (2 connections now open)
[ connection to localhost:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201, 31202 ] 31201 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31201,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 1,
"set" : "add_shard2_rs1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs1-1'
Wed Jun 13 22:29:14 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet add_shard2_rs1 --dbpath /data/db/add_shard2_rs1-1
m31201| note: noprealloc may hurt performance in many applications
m31201| Wed Jun 13 22:29:14
m31201| Wed Jun 13 22:29:14 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31201| Wed Jun 13 22:29:14
m31201| Wed Jun 13 22:29:14 [initandlisten] MongoDB starting : pid=9383 port=31201 dbpath=/data/db/add_shard2_rs1-1 32-bit host=tp2.10gen.cc
m31201| Wed Jun 13 22:29:14 [initandlisten] _DEBUG build (which is slower)
m31201| Wed Jun 13 22:29:14 [initandlisten]
m31201| Wed Jun 13 22:29:14 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31201| Wed Jun 13 22:29:14 [initandlisten] ** Not recommended for production.
m31201| Wed Jun 13 22:29:14 [initandlisten]
m31201| Wed Jun 13 22:29:14 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31201| Wed Jun 13 22:29:14 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31201| Wed Jun 13 22:29:14 [initandlisten] ** with --journal, the limit is lower
m31201| Wed Jun 13 22:29:14 [initandlisten]
m31201| Wed Jun 13 22:29:14 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31201| Wed Jun 13 22:29:14 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31201| Wed Jun 13 22:29:14 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31201| Wed Jun 13 22:29:14 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs1-1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "add_shard2_rs1", rest: true, smallfiles: true }
m31201| Wed Jun 13 22:29:14 [initandlisten] waiting for connections on port 31201
m31201| Wed Jun 13 22:29:14 [websvr] admin web console waiting for connections on port 32201
m31201| Wed Jun 13 22:29:14 [initandlisten] connection accepted from 184.173.149.242:59167 #1 (1 connection now open)
m31201| Wed Jun 13 22:29:14 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31201| Wed Jun 13 22:29:14 [conn1] opening db: local
m31201| Wed Jun 13 22:29:14 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31201| Wed Jun 13 22:29:14 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31201| Wed Jun 13 22:29:14 [initandlisten] connection accepted from 127.0.0.1:50965 #2 (2 connections now open)
[ connection to localhost:31200, connection to localhost:31201 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31200, 31201, 31202 ] 31202 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31202,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 2,
"set" : "add_shard2_rs1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs1-2'
Wed Jun 13 22:29:14 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31202 --noprealloc --smallfiles --rest --replSet add_shard2_rs1 --dbpath /data/db/add_shard2_rs1-2
m31202| note: noprealloc may hurt performance in many applications
m31202| Wed Jun 13 22:29:14
m31202| Wed Jun 13 22:29:14 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31202| Wed Jun 13 22:29:14
m31202| Wed Jun 13 22:29:14 [initandlisten] MongoDB starting : pid=9399 port=31202 dbpath=/data/db/add_shard2_rs1-2 32-bit host=tp2.10gen.cc
m31202| Wed Jun 13 22:29:14 [initandlisten] _DEBUG build (which is slower)
m31202| Wed Jun 13 22:29:14 [initandlisten]
m31202| Wed Jun 13 22:29:14 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31202| Wed Jun 13 22:29:14 [initandlisten] ** Not recommended for production.
m31202| Wed Jun 13 22:29:14 [initandlisten]
m31202| Wed Jun 13 22:29:14 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31202| Wed Jun 13 22:29:14 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31202| Wed Jun 13 22:29:14 [initandlisten] ** with --journal, the limit is lower
m31202| Wed Jun 13 22:29:14 [initandlisten]
m31202| Wed Jun 13 22:29:14 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31202| Wed Jun 13 22:29:14 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31202| Wed Jun 13 22:29:14 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31202| Wed Jun 13 22:29:14 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs1-2", noprealloc: true, oplogSize: 40, port: 31202, replSet: "add_shard2_rs1", rest: true, smallfiles: true }
m31202| Wed Jun 13 22:29:14 [initandlisten] waiting for connections on port 31202
m31202| Wed Jun 13 22:29:14 [websvr] admin web console waiting for connections on port 32202
m31202| Wed Jun 13 22:29:14 [initandlisten] connection accepted from 184.173.149.242:42613 #1 (1 connection now open)
m31202| Wed Jun 13 22:29:14 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31202| Wed Jun 13 22:29:14 [conn1] opening db: local
m31202| Wed Jun 13 22:29:14 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31202| Wed Jun 13 22:29:14 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31202| Wed Jun 13 22:29:14 [initandlisten] connection accepted from 127.0.0.1:55005 #2 (2 connections now open)
[
connection to localhost:31200,
connection to localhost:31201,
connection to localhost:31202
]
{
"replSetInitiate" : {
"_id" : "add_shard2_rs1",
"members" : [
{
"_id" : 0,
"host" : "tp2.10gen.cc:31200"
},
{
"_id" : 1,
"host" : "tp2.10gen.cc:31201"
},
{
"_id" : 2,
"host" : "tp2.10gen.cc:31202"
}
]
}
}
m31200| Wed Jun 13 22:29:14 [conn2] replSet replSetInitiate admin command received from client
m31200| Wed Jun 13 22:29:14 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31201| Wed Jun 13 22:29:14 [initandlisten] connection accepted from 184.173.149.242:59172 #3 (3 connections now open)
m31202| Wed Jun 13 22:29:14 [initandlisten] connection accepted from 184.173.149.242:42616 #3 (3 connections now open)
m31200| Wed Jun 13 22:29:14 [conn2] replSet replSetInitiate all members seem up
m31200| Wed Jun 13 22:29:14 [conn2] ******
m31200| Wed Jun 13 22:29:14 [conn2] creating replication oplog of size: 40MB...
m31200| Wed Jun 13 22:29:14 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-0/local.ns, filling with zeroes...
m31200| Wed Jun 13 22:29:14 [FileAllocator] creating directory /data/db/add_shard2_rs1-0/_tmp
m31200| Wed Jun 13 22:29:14 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-0/local.ns, size: 16MB, took 0.036 secs
m31200| Wed Jun 13 22:29:14 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-0/local.0, filling with zeroes...
m31200| Wed Jun 13 22:29:14 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-0/local.0, size: 64MB, took 0.122 secs
m31200| Wed Jun 13 22:29:14 [conn2] datafileheader::init initializing /data/db/add_shard2_rs1-0/local.0 n:0
m31200| Wed Jun 13 22:29:14 [conn2] ******
m31200| Wed Jun 13 22:29:14 [conn2] replSet info saving a newer config version to local.system.replset
m31200| Wed Jun 13 22:29:14 [conn2] replSet saveConfigLocally done
m31200| Wed Jun 13 22:29:14 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31200| Wed Jun 13 22:29:14 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "add_shard2_rs1", members: [ { _id: 0.0, host: "tp2.10gen.cc:31200" }, { _id: 1.0, host: "tp2.10gen.cc:31201" }, { _id: 2.0, host: "tp2.10gen.cc:31202" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:177011 w:70 reslen:112 178ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
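The initiate document shown above can also be issued by hand from a mongo shell connected to the first member; a sketch using the hosts from this run (rs.initiate(cfg) is the shell helper that wraps the same replSetInitiate admin command):

    // Sketch: send the replSetInitiate document above to tp2.10gen.cc:31200.
    var cfg = {
        _id: "add_shard2_rs1",
        members: [
            { _id: 0, host: "tp2.10gen.cc:31200" },
            { _id: 1, host: "tp2.10gen.cc:31201" },
            { _id: 2, host: "tp2.10gen.cc:31202" }
        ]
    };
    printjson(db.adminCommand({ replSetInitiate: cfg }));
    // Expected reply matches the one logged: "Config now saved locally.
    // Should come online in about a minute.", ok: 1
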
m30999| Wed Jun 13 22:29:23 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644553:1804289383' acquired, ts : 4fd95a9349e9fe2fa8cdeac0
m30999| Wed Jun 13 22:29:23 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644553:1804289383' unlocked.
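Each balancer round acquires and then releases a distributed lock recorded on the config servers, which is what the acquired/unlocked pair above reflects. It can be inspected from a shell connected to the mongos (port 30999 in this run) by reading config.locks; this is a sketch and the exact field layout is assumed from this era's format, with the _id matching the lock name in the log:

    // Sketch: look at the balancer's distributed lock document on the config servers.
    var configDB = db.getSiblingDB("config");
    printjson(configDB.locks.findOne({ _id: "balancer" }));  // holder and ts as seen in the log
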
m31200| Wed Jun 13 22:29:24 [rsStart] replSet load config ok from self
m31200| Wed Jun 13 22:29:24 [rsStart] replSet I am tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:24 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31202
m31200| Wed Jun 13 22:29:24 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31201
m31200| Wed Jun 13 22:29:24 [rsStart] replSet STARTUP2
m31200| Wed Jun 13 22:29:24 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is up
m31200| Wed Jun 13 22:29:24 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is up
m31200| Wed Jun 13 22:29:24 [rsSync] replSet SECONDARY
m31200| Wed Jun 13 22:29:24 [rsMgr] replSet freshest returns { startupStatus: 3, info: "run rs.initiate(...) if not yet done for the set", errmsg: "can't get local.system.replset config from self or any seed (EMPTYCONFIG)", ok: 0.0 }
m31200| Wed Jun 13 22:29:24 [rsMgr] replSet freshest returns { startupStatus: 3, info: "run rs.initiate(...) if not yet done for the set", errmsg: "can't get local.system.replset config from self or any seed (EMPTYCONFIG)", ok: 0.0 }
m31200| Wed Jun 13 22:29:24 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31201| Wed Jun 13 22:29:24 [rsStart] trying to contact tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:24 [initandlisten] connection accepted from 184.173.149.242:40853 #3 (3 connections now open)
m31201| Wed Jun 13 22:29:24 [rsStart] replSet load config ok from tp2.10gen.cc:31200
m31201| Wed Jun 13 22:29:24 [rsStart] replSet I am tp2.10gen.cc:31201
m31201| Wed Jun 13 22:29:24 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31202
m31201| Wed Jun 13 22:29:24 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31200
m31201| Wed Jun 13 22:29:24 [rsStart] replSet got config version 1 from a remote, saving locally
m31201| Wed Jun 13 22:29:24 [rsStart] replSet info saving a newer config version to local.system.replset
m31201| Wed Jun 13 22:29:24 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-1/local.ns, filling with zeroes...
m31201| Wed Jun 13 22:29:24 [FileAllocator] creating directory /data/db/add_shard2_rs1-1/_tmp
m31201| Wed Jun 13 22:29:24 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-1/local.ns, size: 16MB, took 0.037 secs
m31201| Wed Jun 13 22:29:24 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-1/local.0, filling with zeroes...
m31201| Wed Jun 13 22:29:24 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-1/local.0, size: 16MB, took 0.035 secs
m31201| Wed Jun 13 22:29:24 [rsStart] datafileheader::init initializing /data/db/add_shard2_rs1-1/local.0 n:0
m31201| Wed Jun 13 22:29:24 [rsStart] replSet saveConfigLocally done
m31201| Wed Jun 13 22:29:24 [rsStart] replSet STARTUP2
m31201| Wed Jun 13 22:29:24 [rsSync] ******
m31201| Wed Jun 13 22:29:24 [rsSync] creating replication oplog of size: 40MB...
m31201| Wed Jun 13 22:29:24 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-1/local.1, filling with zeroes...
m31202| Wed Jun 13 22:29:24 [rsStart] trying to contact tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:24 [initandlisten] connection accepted from 184.173.149.242:40854 #4 (4 connections now open)
m31202| Wed Jun 13 22:29:24 [rsStart] replSet load config ok from tp2.10gen.cc:31200
m31202| Wed Jun 13 22:29:24 [rsStart] replSet I am tp2.10gen.cc:31202
m31202| Wed Jun 13 22:29:24 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31201
m31202| Wed Jun 13 22:29:24 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31200
m31202| Wed Jun 13 22:29:24 [rsStart] replSet got config version 1 from a remote, saving locally
m31202| Wed Jun 13 22:29:24 [rsStart] replSet info saving a newer config version to local.system.replset
m31202| Wed Jun 13 22:29:24 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-2/local.ns, filling with zeroes...
m31202| Wed Jun 13 22:29:24 [FileAllocator] creating directory /data/db/add_shard2_rs1-2/_tmp
m31201| Wed Jun 13 22:29:24 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-1/local.1, size: 64MB, took 0.125 secs
m31201| Wed Jun 13 22:29:24 [rsSync] datafileheader::init initializing /data/db/add_shard2_rs1-1/local.1 n:1
m31201| Wed Jun 13 22:29:24 [rsSync] ******
m31201| Wed Jun 13 22:29:24 [rsSync] replSet initial sync pending
m31201| Wed Jun 13 22:29:24 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31202| Wed Jun 13 22:29:24 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-2/local.ns, size: 16MB, took 0.038 secs
m31202| Wed Jun 13 22:29:24 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-2/local.0, filling with zeroes...
m31202| Wed Jun 13 22:29:24 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-2/local.0, size: 16MB, took 0.038 secs
m31202| Wed Jun 13 22:29:24 [rsStart] datafileheader::init initializing /data/db/add_shard2_rs1-2/local.0 n:0
m31202| Wed Jun 13 22:29:24 [rsStart] replSet saveConfigLocally done
m31202| Wed Jun 13 22:29:24 [rsStart] replSet STARTUP2
m31202| Wed Jun 13 22:29:24 [rsSync] ******
m31202| Wed Jun 13 22:29:24 [rsSync] creating replication oplog of size: 40MB...
m31202| Wed Jun 13 22:29:24 [FileAllocator] allocating new datafile /data/db/add_shard2_rs1-2/local.1, filling with zeroes...
m31202| Wed Jun 13 22:29:24 [FileAllocator] done allocating datafile /data/db/add_shard2_rs1-2/local.1, size: 64MB, took 0.23 secs
m31202| Wed Jun 13 22:29:24 [rsSync] datafileheader::init initializing /data/db/add_shard2_rs1-2/local.1 n:1
m31202| Wed Jun 13 22:29:24 [rsSync] ******
m31202| Wed Jun 13 22:29:24 [rsSync] replSet initial sync pending
m31202| Wed Jun 13 22:29:24 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31200| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state STARTUP2
m31200| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state STARTUP2
m31200| Wed Jun 13 22:29:26 [rsMgr] not electing self, tp2.10gen.cc:31202 would veto
m31200| Wed Jun 13 22:29:26 [rsMgr] not electing self, tp2.10gen.cc:31202 would veto
m31202| Wed Jun 13 22:29:26 [initandlisten] connection accepted from 184.173.149.242:42619 #4 (4 connections now open)
m31201| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is up
m31201| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state SECONDARY
m31201| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is up
m31201| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state STARTUP2
m31202| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is up
m31202| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state SECONDARY
m31201| Wed Jun 13 22:29:26 [initandlisten] connection accepted from 184.173.149.242:59177 #4 (4 connections now open)
m31202| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is up
m31202| Wed Jun 13 22:29:26 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state STARTUP2
m31200| Wed Jun 13 22:29:32 [rsMgr] replSet info electSelf 0
m31202| Wed Jun 13 22:29:32 [conn3] replSet received elect msg { replSetElect: 1, set: "add_shard2_rs1", who: "tp2.10gen.cc:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd95a9caf983f945379f2e7') }
m31201| Wed Jun 13 22:29:32 [conn3] replSet received elect msg { replSetElect: 1, set: "add_shard2_rs1", who: "tp2.10gen.cc:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd95a9caf983f945379f2e7') }
m31202| Wed Jun 13 22:29:32 [conn3] replSet RECOVERING
m31202| Wed Jun 13 22:29:32 [conn3] replSet info voting yea for tp2.10gen.cc:31200 (0)
m31201| Wed Jun 13 22:29:32 [conn3] replSet RECOVERING
m31201| Wed Jun 13 22:29:32 [conn3] replSet info voting yea for tp2.10gen.cc:31200 (0)
m31200| Wed Jun 13 22:29:32 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95a9caf983f945379f2e7'), ok: 1.0 }
m31200| Wed Jun 13 22:29:32 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95a9caf983f945379f2e7'), ok: 1.0 }
m31200| Wed Jun 13 22:29:32 [rsMgr] info PageFaultRetryableSection will not yield, already locked upon reaching
m31200| Wed Jun 13 22:29:32 [rsMgr] replSet PRIMARY
m31201| Wed Jun 13 22:29:32 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state PRIMARY
m31201| Wed Jun 13 22:29:32 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state RECOVERING
m31202| Wed Jun 13 22:29:32 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state PRIMARY
m31202| Wed Jun 13 22:29:32 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state RECOVERING
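31200 has now won the election and both other members see it as PRIMARY (they report RECOVERING until their initial sync completes further down). A test that must block until a primary exists can poll replSetGetStatus; a minimal sketch, with a hypothetical helper name and the usual member state codes (1 = PRIMARY):

    // Sketch: poll replSetGetStatus on a member connection until some member is PRIMARY.
    function waitForPrimary(conn, timeoutMs) {
        var start = new Date();
        while (new Date() - start < timeoutMs) {
            var status = conn.getDB("admin").runCommand({ replSetGetStatus: 1 });
            if (status.ok && status.members.some(function (m) { return m.state == 1; }))
                return true;
            sleep(1000);                         // mongo shell built-in, milliseconds
        }
        return false;
    }
    // e.g. waitForPrimary(new Mongo("tp2.10gen.cc:31200"), 60000)
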
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31203, 31204, 31205 ] 31203 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31203,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs2",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 0,
"set" : "add_shard2_rs2"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs2-0'
Wed Jun 13 22:29:32 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31203 --noprealloc --smallfiles --rest --replSet add_shard2_rs2 --dbpath /data/db/add_shard2_rs2-0
m31203| note: noprealloc may hurt performance in many applications
m31203| Wed Jun 13 22:29:33
m31203| Wed Jun 13 22:29:33 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31203| Wed Jun 13 22:29:33
m31203| Wed Jun 13 22:29:33 [initandlisten] MongoDB starting : pid=9458 port=31203 dbpath=/data/db/add_shard2_rs2-0 32-bit host=tp2.10gen.cc
m31203| Wed Jun 13 22:29:33 [initandlisten] _DEBUG build (which is slower)
m31203| Wed Jun 13 22:29:33 [initandlisten]
m31203| Wed Jun 13 22:29:33 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31203| Wed Jun 13 22:29:33 [initandlisten] ** Not recommended for production.
m31203| Wed Jun 13 22:29:33 [initandlisten]
m31203| Wed Jun 13 22:29:33 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31203| Wed Jun 13 22:29:33 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31203| Wed Jun 13 22:29:33 [initandlisten] ** with --journal, the limit is lower
m31203| Wed Jun 13 22:29:33 [initandlisten]
m31203| Wed Jun 13 22:29:33 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31203| Wed Jun 13 22:29:33 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31203| Wed Jun 13 22:29:33 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31203| Wed Jun 13 22:29:33 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs2-0", noprealloc: true, oplogSize: 40, port: 31203, replSet: "add_shard2_rs2", rest: true, smallfiles: true }
m31203| Wed Jun 13 22:29:33 [initandlisten] waiting for connections on port 31203
m31203| Wed Jun 13 22:29:33 [websvr] admin web console waiting for connections on port 32203
m31203| Wed Jun 13 22:29:33 [initandlisten] connection accepted from 184.173.149.242:43031 #1 (1 connection now open)
m31203| Wed Jun 13 22:29:33 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31203| Wed Jun 13 22:29:33 [conn1] opening db: local
m31203| Wed Jun 13 22:29:33 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31203| Wed Jun 13 22:29:33 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31203| Wed Jun 13 22:29:33 [initandlisten] connection accepted from 127.0.0.1:52600 #2 (2 connections now open)
[ connection to localhost:31203 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31203, 31204, 31205 ] 31204 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31204,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs2",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 1,
"set" : "add_shard2_rs2"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs2-1'
Wed Jun 13 22:29:33 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31204 --noprealloc --smallfiles --rest --replSet add_shard2_rs2 --dbpath /data/db/add_shard2_rs2-1
m31204| note: noprealloc may hurt performance in many applications
m31204| Wed Jun 13 22:29:33
m31204| Wed Jun 13 22:29:33 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31204| Wed Jun 13 22:29:33
m31204| Wed Jun 13 22:29:33 [initandlisten] MongoDB starting : pid=9474 port=31204 dbpath=/data/db/add_shard2_rs2-1 32-bit host=tp2.10gen.cc
m31204| Wed Jun 13 22:29:33 [initandlisten] _DEBUG build (which is slower)
m31204| Wed Jun 13 22:29:33 [initandlisten]
m31204| Wed Jun 13 22:29:33 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31204| Wed Jun 13 22:29:33 [initandlisten] ** Not recommended for production.
m31204| Wed Jun 13 22:29:33 [initandlisten]
m31204| Wed Jun 13 22:29:33 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31204| Wed Jun 13 22:29:33 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31204| Wed Jun 13 22:29:33 [initandlisten] ** with --journal, the limit is lower
m31204| Wed Jun 13 22:29:33 [initandlisten]
m31204| Wed Jun 13 22:29:33 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31204| Wed Jun 13 22:29:33 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31204| Wed Jun 13 22:29:33 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31204| Wed Jun 13 22:29:33 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs2-1", noprealloc: true, oplogSize: 40, port: 31204, replSet: "add_shard2_rs2", rest: true, smallfiles: true }
m31204| Wed Jun 13 22:29:33 [initandlisten] waiting for connections on port 31204
m31204| Wed Jun 13 22:29:33 [websvr] admin web console waiting for connections on port 32204
m31204| Wed Jun 13 22:29:33 [initandlisten] connection accepted from 184.173.149.242:60048 #1 (1 connection now open)
m31204| Wed Jun 13 22:29:33 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31204| Wed Jun 13 22:29:33 [conn1] opening db: local
m31204| Wed Jun 13 22:29:33 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31204| Wed Jun 13 22:29:33 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31204| Wed Jun 13 22:29:33 [initandlisten] connection accepted from 127.0.0.1:33263 #2 (2 connections now open)
[ connection to localhost:31203, connection to localhost:31204 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31203, 31204, 31205 ] 31205 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31205,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "add_shard2_rs2",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 2,
"set" : "add_shard2_rs2"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/add_shard2_rs2-2'
Wed Jun 13 22:29:33 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31205 --noprealloc --smallfiles --rest --replSet add_shard2_rs2 --dbpath /data/db/add_shard2_rs2-2
m31205| note: noprealloc may hurt performance in many applications
m31205| Wed Jun 13 22:29:33
m31205| Wed Jun 13 22:29:33 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31205| Wed Jun 13 22:29:33
m31205| Wed Jun 13 22:29:33 [initandlisten] MongoDB starting : pid=9490 port=31205 dbpath=/data/db/add_shard2_rs2-2 32-bit host=tp2.10gen.cc
m31205| Wed Jun 13 22:29:33 [initandlisten] _DEBUG build (which is slower)
m31205| Wed Jun 13 22:29:33 [initandlisten]
m31205| Wed Jun 13 22:29:33 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31205| Wed Jun 13 22:29:33 [initandlisten] ** Not recommended for production.
m31205| Wed Jun 13 22:29:33 [initandlisten]
m31205| Wed Jun 13 22:29:33 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31205| Wed Jun 13 22:29:33 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31205| Wed Jun 13 22:29:33 [initandlisten] ** with --journal, the limit is lower
m31205| Wed Jun 13 22:29:33 [initandlisten]
m31205| Wed Jun 13 22:29:33 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31205| Wed Jun 13 22:29:33 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31205| Wed Jun 13 22:29:33 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31205| Wed Jun 13 22:29:33 [initandlisten] options: { dbpath: "/data/db/add_shard2_rs2-2", noprealloc: true, oplogSize: 40, port: 31205, replSet: "add_shard2_rs2", rest: true, smallfiles: true }
m31205| Wed Jun 13 22:29:33 [initandlisten] waiting for connections on port 31205
m31205| Wed Jun 13 22:29:33 [websvr] admin web console waiting for connections on port 32205
m31205| Wed Jun 13 22:29:33 [initandlisten] connection accepted from 184.173.149.242:53594 #1 (1 connection now open)
m31205| Wed Jun 13 22:29:33 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31205| Wed Jun 13 22:29:33 [conn1] opening db: local
m31205| Wed Jun 13 22:29:33 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31205| Wed Jun 13 22:29:33 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31205| Wed Jun 13 22:29:33 [initandlisten] connection accepted from 127.0.0.1:33244 #2 (2 connections now open)
[
connection to localhost:31203,
connection to localhost:31204,
connection to localhost:31205
]
{
"replSetInitiate" : {
"_id" : "add_shard2_rs2",
"members" : [
{
"_id" : 0,
"host" : "tp2.10gen.cc:31203"
},
{
"_id" : 1,
"host" : "tp2.10gen.cc:31204"
},
{
"_id" : 2,
"host" : "tp2.10gen.cc:31205"
}
]
}
}
m31203| Wed Jun 13 22:29:33 [conn2] replSet replSetInitiate admin command received from client
m31203| Wed Jun 13 22:29:33 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31204| Wed Jun 13 22:29:33 [initandlisten] connection accepted from 184.173.149.242:60053 #3 (3 connections now open)
m31205| Wed Jun 13 22:29:33 [initandlisten] connection accepted from 184.173.149.242:53597 #3 (3 connections now open)
m31203| Wed Jun 13 22:29:33 [conn2] replSet replSetInitiate all members seem up
m31203| Wed Jun 13 22:29:33 [conn2] ******
m31203| Wed Jun 13 22:29:33 [conn2] creating replication oplog of size: 40MB...
m31203| Wed Jun 13 22:29:33 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-0/local.ns, filling with zeroes...
m31203| Wed Jun 13 22:29:33 [FileAllocator] creating directory /data/db/add_shard2_rs2-0/_tmp
m31203| Wed Jun 13 22:29:33 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-0/local.ns, size: 16MB, took 0.037 secs
m31203| Wed Jun 13 22:29:33 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-0/local.0, filling with zeroes...
m30999| Wed Jun 13 22:29:33 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644553:1804289383' acquired, ts : 4fd95a9d49e9fe2fa8cdeac1
m30999| Wed Jun 13 22:29:33 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644553:1804289383' unlocked.
m31203| Wed Jun 13 22:29:33 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-0/local.0, size: 64MB, took 0.12 secs
m31203| Wed Jun 13 22:29:33 [conn2] datafileheader::init initializing /data/db/add_shard2_rs2-0/local.0 n:0
m31203| Wed Jun 13 22:29:33 [conn2] ******
m31203| Wed Jun 13 22:29:33 [conn2] replSet info saving a newer config version to local.system.replset
m31203| Wed Jun 13 22:29:33 [conn2] replSet saveConfigLocally done
m31203| Wed Jun 13 22:29:33 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31203| Wed Jun 13 22:29:33 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "add_shard2_rs2", members: [ { _id: 0.0, host: "tp2.10gen.cc:31203" }, { _id: 1.0, host: "tp2.10gen.cc:31204" }, { _id: 2.0, host: "tp2.10gen.cc:31205" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:179539 w:70 reslen:112 180ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
m31200| Wed Jun 13 22:29:34 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state RECOVERING
m31200| Wed Jun 13 22:29:34 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state RECOVERING
m31201| Wed Jun 13 22:29:38 [conn3] end connection 184.173.149.242:59172 (3 connections now open)
m31201| Wed Jun 13 22:29:38 [initandlisten] connection accepted from 184.173.149.242:59195 #5 (4 connections now open)
m31200| Wed Jun 13 22:29:40 [conn3] end connection 184.173.149.242:40853 (3 connections now open)
m31200| Wed Jun 13 22:29:40 [initandlisten] connection accepted from 184.173.149.242:40875 #5 (4 connections now open)
m31200| Wed Jun 13 22:29:40 [conn4] end connection 184.173.149.242:40854 (3 connections now open)
m31200| Wed Jun 13 22:29:40 [initandlisten] connection accepted from 184.173.149.242:40876 #6 (4 connections now open)
m31201| Wed Jun 13 22:29:40 [rsSync] replSet initial sync pending
m31201| Wed Jun 13 22:29:40 [rsSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:40 [initandlisten] connection accepted from 184.173.149.242:40877 #7 (5 connections now open)
m31201| Wed Jun 13 22:29:40 [rsSync] build index local.me { _id: 1 }
m31201| Wed Jun 13 22:29:40 [rsSync] build index done. scanned 0 total records. 0.007 secs
m31201| Wed Jun 13 22:29:40 [rsSync] replSet initial sync drop all databases
m31201| Wed Jun 13 22:29:40 [rsSync] dropAllDatabasesExceptLocal 1
m31201| Wed Jun 13 22:29:40 [rsSync] replSet initial sync clone all databases
m31201| Wed Jun 13 22:29:40 [rsSync] replSet initial sync data copy, starting syncup
m31201| Wed Jun 13 22:29:40 [rsSync] replSet initial sync building indexes
m31201| Wed Jun 13 22:29:40 [rsSync] replSet initial sync query minValid
m31201| Wed Jun 13 22:29:40 [rsSync] replSet initial sync finishing up
m31201| Wed Jun 13 22:29:40 [rsSync] replSet set minValid=4fd95a8a:b
m31201| Wed Jun 13 22:29:40 [rsSync] build index local.replset.minvalid { _id: 1 }
m31201| Wed Jun 13 22:29:40 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Wed Jun 13 22:29:40 [rsSync] replSet initial sync done
m31200| Wed Jun 13 22:29:40 [conn7] end connection 184.173.149.242:40877 (4 connections now open)
m31202| Wed Jun 13 22:29:40 [rsSync] replSet initial sync pending
m31202| Wed Jun 13 22:29:40 [rsSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:40 [initandlisten] connection accepted from 184.173.149.242:40878 #8 (5 connections now open)
m31202| Wed Jun 13 22:29:40 [rsSync] build index local.me { _id: 1 }
m31202| Wed Jun 13 22:29:40 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Wed Jun 13 22:29:40 [rsSync] replSet initial sync drop all databases
m31202| Wed Jun 13 22:29:40 [rsSync] dropAllDatabasesExceptLocal 1
m31202| Wed Jun 13 22:29:40 [rsSync] replSet initial sync clone all databases
m31202| Wed Jun 13 22:29:40 [rsSync] replSet initial sync data copy, starting syncup
m31202| Wed Jun 13 22:29:40 [rsSync] replSet initial sync building indexes
m31202| Wed Jun 13 22:29:40 [rsSync] replSet initial sync query minValid
m31202| Wed Jun 13 22:29:40 [rsSync] replSet initial sync finishing up
m31202| Wed Jun 13 22:29:40 [rsSync] replSet set minValid=4fd95a8a:b
m31202| Wed Jun 13 22:29:40 [rsSync] build index local.replset.minvalid { _id: 1 }
m31202| Wed Jun 13 22:29:40 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Wed Jun 13 22:29:40 [rsSync] replSet initial sync done
m31200| Wed Jun 13 22:29:40 [conn8] end connection 184.173.149.242:40878 (4 connections now open)
m31201| Wed Jun 13 22:29:41 [rsBackgroundSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:41 [initandlisten] connection accepted from 184.173.149.242:40879 #9 (5 connections now open)
m31201| Wed Jun 13 22:29:41 [rsBackgroundSync] replSet remoteOldestOp: Jun 13 22:29:14 4fd95a8a:b
m31201| Wed Jun 13 22:29:41 [rsBackgroundSync] replSet lastOpTimeFetched: Jun 13 22:29:14 4fd95a8a:b
m31200| Wed Jun 13 22:29:41 [conn9] query has no more but tailable, cursorid: 4192076646754948750
m31201| Wed Jun 13 22:29:41 [rsSyncNotifier] replset setting oplog notifier to tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:41 [initandlisten] connection accepted from 184.173.149.242:40880 #10 (6 connections now open)
m31200| Wed Jun 13 22:29:41 [conn10] query has no more but tailable, cursorid: 1526610798871975716
m31201| Wed Jun 13 22:29:41 [rsSync] replSet SECONDARY
m31202| Wed Jun 13 22:29:41 [rsBackgroundSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:41 [initandlisten] connection accepted from 184.173.149.242:40881 #11 (7 connections now open)
m31202| Wed Jun 13 22:29:41 [rsBackgroundSync] replSet remoteOldestOp: Jun 13 22:29:14 4fd95a8a:b
m31202| Wed Jun 13 22:29:41 [rsBackgroundSync] replSet lastOpTimeFetched: Jun 13 22:29:14 4fd95a8a:b
m31200| Wed Jun 13 22:29:41 [conn11] query has no more but tailable, cursorid: 1412182759753521000
m31202| Wed Jun 13 22:29:41 [rsSyncNotifier] replset setting oplog notifier to tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:41 [initandlisten] connection accepted from 184.173.149.242:40882 #12 (8 connections now open)
m31200| Wed Jun 13 22:29:41 [conn12] query has no more but tailable, cursorid: 1599704910252349532
m31202| Wed Jun 13 22:29:41 [rsSync] replSet SECONDARY
m31200| Wed Jun 13 22:29:42 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state SECONDARY
m31200| Wed Jun 13 22:29:42 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state SECONDARY
m31201| Wed Jun 13 22:29:42 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state SECONDARY
m31202| Wed Jun 13 22:29:42 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state SECONDARY
m31200| Wed Jun 13 22:29:42 [slaveTracking] build index local.slaves { _id: 1 }
m31200| Wed Jun 13 22:29:42 [slaveTracking] build index done. scanned 0 total records. 0 secs
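Both secondaries have finished initial sync and reached SECONDARY, and the primary has started tracking their progress in the local.slaves collection created just above. That progress can be read directly on the primary; a small sketch:

    // Sketch: on the primary (31200), local.slaves records the last optime each
    // secondary has reported while syncing from it.
    db.getSiblingDB("local").slaves.find().forEach(printjson);
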
m31203| Wed Jun 13 22:29:43 [rsStart] replSet load config ok from self
m31203| Wed Jun 13 22:29:43 [rsStart] replSet I am tp2.10gen.cc:31203
m31203| Wed Jun 13 22:29:43 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31205
m31203| Wed Jun 13 22:29:43 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31204
m31203| Wed Jun 13 22:29:43 [rsStart] replSet STARTUP2
m31203| Wed Jun 13 22:29:43 [rsHealthPoll] replSet member tp2.10gen.cc:31205 is up
m31203| Wed Jun 13 22:29:43 [rsHealthPoll] replSet member tp2.10gen.cc:31204 is up
m31203| Wed Jun 13 22:29:43 [rsSync] replSet SECONDARY
m31204| Wed Jun 13 22:29:43 [rsStart] trying to contact tp2.10gen.cc:31203
m31203| Wed Jun 13 22:29:43 [initandlisten] connection accepted from 184.173.149.242:43056 #3 (3 connections now open)
m31204| Wed Jun 13 22:29:43 [rsStart] replSet load config ok from tp2.10gen.cc:31203
m31204| Wed Jun 13 22:29:43 [rsStart] replSet I am tp2.10gen.cc:31204
m31204| Wed Jun 13 22:29:43 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31205
m31204| Wed Jun 13 22:29:43 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31203
m31204| Wed Jun 13 22:29:43 [rsStart] replSet got config version 1 from a remote, saving locally
m31204| Wed Jun 13 22:29:43 [rsStart] replSet info saving a newer config version to local.system.replset
m31204| Wed Jun 13 22:29:43 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-1/local.ns, filling with zeroes...
m31204| Wed Jun 13 22:29:43 [FileAllocator] creating directory /data/db/add_shard2_rs2-1/_tmp
m31204| Wed Jun 13 22:29:43 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-1/local.ns, size: 16MB, took 0.039 secs
m31204| Wed Jun 13 22:29:43 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-1/local.0, filling with zeroes...
m31204| Wed Jun 13 22:29:43 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-1/local.0, size: 16MB, took 0.037 secs
m31204| Wed Jun 13 22:29:43 [rsStart] datafileheader::init initializing /data/db/add_shard2_rs2-1/local.0 n:0
m31204| Wed Jun 13 22:29:43 [rsStart] replSet saveConfigLocally done
m31204| Wed Jun 13 22:29:43 [rsStart] replSet STARTUP2
m31204| Wed Jun 13 22:29:43 [rsSync] ******
m31204| Wed Jun 13 22:29:43 [rsSync] creating replication oplog of size: 40MB...
m31204| Wed Jun 13 22:29:43 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-1/local.1, filling with zeroes...
m31205| Wed Jun 13 22:29:43 [rsStart] trying to contact tp2.10gen.cc:31203
m31203| Wed Jun 13 22:29:43 [initandlisten] connection accepted from 184.173.149.242:43057 #4 (4 connections now open)
m31205| Wed Jun 13 22:29:43 [rsStart] replSet load config ok from tp2.10gen.cc:31203
m31205| Wed Jun 13 22:29:43 [rsStart] replSet I am tp2.10gen.cc:31205
m31205| Wed Jun 13 22:29:43 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31204
m31205| Wed Jun 13 22:29:43 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31203
m31205| Wed Jun 13 22:29:43 [rsStart] replSet got config version 1 from a remote, saving locally
m31205| Wed Jun 13 22:29:43 [rsStart] replSet info saving a newer config version to local.system.replset
m31205| Wed Jun 13 22:29:43 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-2/local.ns, filling with zeroes...
m31205| Wed Jun 13 22:29:43 [FileAllocator] creating directory /data/db/add_shard2_rs2-2/_tmp
m31204| Wed Jun 13 22:29:43 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-1/local.1, size: 64MB, took 0.132 secs
m31204| Wed Jun 13 22:29:43 [rsSync] datafileheader::init initializing /data/db/add_shard2_rs2-1/local.1 n:1
m31204| Wed Jun 13 22:29:43 [rsSync] ******
m31204| Wed Jun 13 22:29:43 [rsSync] replSet initial sync pending
m31204| Wed Jun 13 22:29:43 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31205| Wed Jun 13 22:29:43 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-2/local.ns, size: 16MB, took 0.033 secs
m31205| Wed Jun 13 22:29:43 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-2/local.0, filling with zeroes...
m31205| Wed Jun 13 22:29:43 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-2/local.0, size: 16MB, took 0.039 secs
m31205| Wed Jun 13 22:29:43 [rsStart] datafileheader::init initializing /data/db/add_shard2_rs2-2/local.0 n:0
m31205| Wed Jun 13 22:29:43 [rsStart] replSet saveConfigLocally done
m31205| Wed Jun 13 22:29:43 [rsStart] replSet STARTUP2
m31205| Wed Jun 13 22:29:43 [rsSync] ******
m31205| Wed Jun 13 22:29:43 [rsSync] creating replication oplog of size: 40MB...
m31205| Wed Jun 13 22:29:43 [FileAllocator] allocating new datafile /data/db/add_shard2_rs2-2/local.1, filling with zeroes...
m30999| Wed Jun 13 22:29:43 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644553:1804289383' acquired, ts : 4fd95aa749e9fe2fa8cdeac2
m30999| Wed Jun 13 22:29:43 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644553:1804289383' unlocked.
m31205| Wed Jun 13 22:29:43 [FileAllocator] done allocating datafile /data/db/add_shard2_rs2-2/local.1, size: 64MB, took 0.325 secs
m31205| Wed Jun 13 22:29:43 [rsSync] datafileheader::init initializing /data/db/add_shard2_rs2-2/local.1 n:1
m31205| Wed Jun 13 22:29:43 [rsSync] ******
m31205| Wed Jun 13 22:29:43 [rsSync] replSet initial sync pending
m31205| Wed Jun 13 22:29:43 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31203| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31205 is now in state STARTUP2
m31203| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31204 is now in state STARTUP2
m31203| Wed Jun 13 22:29:45 [rsMgr] not electing self, tp2.10gen.cc:31205 would veto
m31203| Wed Jun 13 22:29:45 [rsMgr] not electing self, tp2.10gen.cc:31205 would veto
m31205| Wed Jun 13 22:29:45 [initandlisten] connection accepted from 184.173.149.242:53615 #4 (4 connections now open)
m31204| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31203 is up
m31204| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31203 is now in state SECONDARY
m31204| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31205 is up
m31204| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31205 is now in state STARTUP2
m31205| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31203 is up
m31205| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31203 is now in state SECONDARY
m31204| Wed Jun 13 22:29:45 [initandlisten] connection accepted from 184.173.149.242:60073 #4 (4 connections now open)
m31205| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31204 is up
m31205| Wed Jun 13 22:29:45 [rsHealthPoll] replSet member tp2.10gen.cc:31204 is now in state STARTUP2
m31203| Wed Jun 13 22:29:51 [rsMgr] replSet info electSelf 0
m31204| Wed Jun 13 22:29:51 [conn3] replSet received elect msg { replSetElect: 1, set: "add_shard2_rs2", who: "tp2.10gen.cc:31203", whoid: 0, cfgver: 1, round: ObjectId('4fd95aafb091e07457736469') }
m31205| Wed Jun 13 22:29:51 [conn3] replSet received elect msg { replSetElect: 1, set: "add_shard2_rs2", who: "tp2.10gen.cc:31203", whoid: 0, cfgver: 1, round: ObjectId('4fd95aafb091e07457736469') }
m31204| Wed Jun 13 22:29:51 [conn3] replSet RECOVERING
m31205| Wed Jun 13 22:29:51 [conn3] replSet RECOVERING
m31204| Wed Jun 13 22:29:51 [conn3] replSet info voting yea for tp2.10gen.cc:31203 (0)
m31205| Wed Jun 13 22:29:51 [conn3] replSet info voting yea for tp2.10gen.cc:31203 (0)
m31203| Wed Jun 13 22:29:51 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95aafb091e07457736469'), ok: 1.0 }
m31203| Wed Jun 13 22:29:51 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95aafb091e07457736469'), ok: 1.0 }
m31203| Wed Jun 13 22:29:51 [rsMgr] info PageFaultRetryableSection will not yield, already locked upon reaching
m31203| Wed Jun 13 22:29:51 [rsMgr] replSet PRIMARY
m31204| Wed Jun 13 22:29:51 [rsHealthPoll] replSet member tp2.10gen.cc:31203 is now in state PRIMARY
m31204| Wed Jun 13 22:29:51 [rsHealthPoll] replSet member tp2.10gen.cc:31205 is now in state RECOVERING
m31205| Wed Jun 13 22:29:51 [rsHealthPoll] replSet member tp2.10gen.cc:31203 is now in state PRIMARY
m31205| Wed Jun 13 22:29:51 [rsHealthPoll] replSet member tp2.10gen.cc:31204 is now in state RECOVERING
m30001| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:37696 #2 (2 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] going to add shard: { _id: "bar", host: "tp2.10gen.cc:30001" }
m30999| Wed Jun 13 22:29:51 [conn] creating WriteBackListener for: tp2.10gen.cc:30001 serverID: 4fd95a8949e9fe2fa8cdeabe
m30001| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:37697 #3 (3 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] creating WriteBackListener for: tp2.10gen.cc:30000 serverID: 4fd95a8949e9fe2fa8cdeabe
m30000| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:50719 #7 (7 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] starting new replica set monitor for replica set add_shard2_rs1 with seed of tp2.10gen.cc:31200
m30999| Wed Jun 13 22:29:51 [conn] successfully connected to seed tp2.10gen.cc:31200 for replica set add_shard2_rs1
m31200| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:40890 #13 (9 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] changing hosts to { 0: "tp2.10gen.cc:31200", 1: "tp2.10gen.cc:31202", 2: "tp2.10gen.cc:31201" } from add_shard2_rs1/
m30999| Wed Jun 13 22:29:51 [conn] trying to add new host tp2.10gen.cc:31200 to replica set add_shard2_rs1
m30999| Wed Jun 13 22:29:51 [conn] successfully connected to new host tp2.10gen.cc:31200 in replica set add_shard2_rs1
m30999| Wed Jun 13 22:29:51 [conn] trying to add new host tp2.10gen.cc:31201 to replica set add_shard2_rs1
m31200| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:40891 #14 (10 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] successfully connected to new host tp2.10gen.cc:31201 in replica set add_shard2_rs1
m30999| Wed Jun 13 22:29:51 [conn] trying to add new host tp2.10gen.cc:31202 to replica set add_shard2_rs1
m31201| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:59213 #6 (5 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] successfully connected to new host tp2.10gen.cc:31202 in replica set add_shard2_rs1
m31202| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:42657 #5 (5 connections now open)
m31200| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:40894 #15 (11 connections now open)
m31200| Wed Jun 13 22:29:51 [conn13] end connection 184.173.149.242:40890 (10 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] Primary for replica set add_shard2_rs1 changed to tp2.10gen.cc:31200
m31201| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:59216 #7 (6 connections now open)
m31202| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:42660 #6 (6 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] replica set monitor for replica set add_shard2_rs1 started, address is add_shard2_rs1/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m30999| Wed Jun 13 22:29:51 [ReplicaSetMonitorWatcher] starting
m31200| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:40897 #16 (11 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] going to add shard: { _id: "add_shard2_rs1", host: "add_shard2_rs1/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202" }
m30999| Wed Jun 13 22:29:51 [conn] starting new replica set monitor for replica set add_shard2_rs2 with seed of tp2.10gen.cc:31203
m30999| Wed Jun 13 22:29:51 [conn] successfully connected to seed tp2.10gen.cc:31203 for replica set add_shard2_rs2
m31203| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:43071 #5 (5 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] changing hosts to { 0: "tp2.10gen.cc:31203", 1: "tp2.10gen.cc:31205", 2: "tp2.10gen.cc:31204" } from add_shard2_rs2/
m30999| Wed Jun 13 22:29:51 [conn] trying to add new host tp2.10gen.cc:31203 to replica set add_shard2_rs2
m30999| Wed Jun 13 22:29:51 [conn] successfully connected to new host tp2.10gen.cc:31203 in replica set add_shard2_rs2
m30999| Wed Jun 13 22:29:51 [conn] trying to add new host tp2.10gen.cc:31204 to replica set add_shard2_rs2
m31203| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:43072 #6 (6 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] successfully connected to new host tp2.10gen.cc:31204 in replica set add_shard2_rs2
m30999| Wed Jun 13 22:29:51 [conn] trying to add new host tp2.10gen.cc:31205 to replica set add_shard2_rs2
m31204| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:60087 #5 (5 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] successfully connected to new host tp2.10gen.cc:31205 in replica set add_shard2_rs2
m31205| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:53631 #5 (5 connections now open)
m31203| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:43075 #7 (7 connections now open)
m31203| Wed Jun 13 22:29:51 [conn5] end connection 184.173.149.242:43071 (6 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] Primary for replica set add_shard2_rs2 changed to tp2.10gen.cc:31203
m31204| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:60090 #6 (6 connections now open)
m31205| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:53634 #6 (6 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] replica set monitor for replica set add_shard2_rs2 started, address is add_shard2_rs2/tp2.10gen.cc:31203,tp2.10gen.cc:31204,tp2.10gen.cc:31205
m31203| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:43078 #8 (7 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] going to add shard: { _id: "myshard", host: "add_shard2_rs2/tp2.10gen.cc:31203,tp2.10gen.cc:31204,tp2.10gen.cc:31205" }
m30002| Wed Jun 13 22:29:51 [initandlisten] connection accepted from 184.173.149.242:49298 #2 (2 connections now open)
m30999| Wed Jun 13 22:29:51 [conn] going to add shard: { _id: "shard0001", host: "tp2.10gen.cc:30002" }
m30999| Wed Jun 13 22:29:51 [conn] addshard request { addshard: "add_shard2_rs2/NonExistingHost:31203" } failed: in seed list add_shard2_rs2/NonExistingHost:31203, host NonExistingHost:31203 does not belong to replica set add_shard2_rs2
m30999| Wed Jun 13 22:29:51 [conn] addshard request { addshard: "add_shard2_rs2/tp2.10gen.cc:31203,foo:9999" } failed: in seed list add_shard2_rs2/tp2.10gen.cc:31203,foo:9999, host foo:9999 does not belong to replica set add_shard2_rs2
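The two rejected requests above show the check mongos applies to a replica-set seed list: every host named in the "setName/host,host,..." string must actually be a member of that set. The successful adds earlier in this block used either a plain host:port or a seed-list string, with an explicit shard name ("bar", "myshard") passed in the command's name field. A sketch of the corresponding shell commands, run against the mongos and using the hosts from this run:

    // Sketch: the addshard variants exercised above (admin commands via the mongos).
    var admin = db.getSiblingDB("admin");
    // standalone mongod, explicit shard name "bar"
    admin.runCommand({ addshard: "tp2.10gen.cc:30001", name: "bar" });
    // replica set shards: setName/host[,host...] seed-list form
    admin.runCommand({ addshard: "add_shard2_rs1/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202" });
    admin.runCommand({ addshard: "add_shard2_rs2/tp2.10gen.cc:31203", name: "myshard" });
    // these two fail as logged: the extra hosts are not members of add_shard2_rs2
    admin.runCommand({ addshard: "add_shard2_rs2/NonExistingHost:31203" });
    admin.runCommand({ addshard: "add_shard2_rs2/tp2.10gen.cc:31203,foo:9999" });
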
m30999| Wed Jun 13 22:29:51 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Wed Jun 13 22:29:51 [conn4] end connection 184.173.149.242:50664 (6 connections now open)
m31200| Wed Jun 13 22:29:51 [conn14] end connection 184.173.149.242:40891 (10 connections now open)
m31200| Wed Jun 13 22:29:51 [conn15] end connection 184.173.149.242:40894 (10 connections now open)
m31200| Wed Jun 13 22:29:51 [conn16] end connection 184.173.149.242:40897 (10 connections now open)
m31201| Wed Jun 13 22:29:51 [conn6] end connection 184.173.149.242:59213 (5 connections now open)
m31202| Wed Jun 13 22:29:51 [conn5] end connection 184.173.149.242:42657 (5 connections now open)
m31202| Wed Jun 13 22:29:51 [conn6] end connection 184.173.149.242:42660 (5 connections now open)
m30000| Wed Jun 13 22:29:51 [conn6] end connection 184.173.149.242:50666 (5 connections now open)
m30000| Wed Jun 13 22:29:51 [conn3] end connection 184.173.149.242:50663 (6 connections now open)
m31204| Wed Jun 13 22:29:51 [conn6] end connection 184.173.149.242:60090 (5 connections now open)
m30000| Wed Jun 13 22:29:51 [conn7] end connection 184.173.149.242:50719 (3 connections now open)
m31203| Wed Jun 13 22:29:51 [conn6] end connection 184.173.149.242:43072 (6 connections now open)
m31204| Wed Jun 13 22:29:51 [conn5] end connection 184.173.149.242:60087 (5 connections now open)
m31205| Wed Jun 13 22:29:51 [conn5] end connection 184.173.149.242:53631 (5 connections now open)
m31201| Wed Jun 13 22:29:51 [conn7] end connection 184.173.149.242:59216 (4 connections now open)
m30001| Wed Jun 13 22:29:51 [conn3] end connection 184.173.149.242:37697 (2 connections now open)
m31205| Wed Jun 13 22:29:51 [conn6] end connection 184.173.149.242:53634 (4 connections now open)
m31203| Wed Jun 13 22:29:51 [conn7] end connection 184.173.149.242:43075 (5 connections now open)
m30002| Wed Jun 13 22:29:51 [conn2] end connection 184.173.149.242:49298 (1 connection now open)
m31203| Wed Jun 13 22:29:51 [conn8] end connection 184.173.149.242:43078 (5 connections now open)
m31202| Wed Jun 13 22:29:52 [conn3] end connection 184.173.149.242:42616 (3 connections now open)
m31202| Wed Jun 13 22:29:52 [initandlisten] connection accepted from 184.173.149.242:42671 #7 (4 connections now open)
Wed Jun 13 22:29:52 shell: stopped mongo program on port 30999
m30000| Wed Jun 13 22:29:52 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Wed Jun 13 22:29:52 [interruptThread] now exiting
m30000| Wed Jun 13 22:29:52 dbexit:
m30000| Wed Jun 13 22:29:52 [interruptThread] shutdown: going to close listening sockets...
m30000| Wed Jun 13 22:29:52 [interruptThread] closing listening socket: 11
m30000| Wed Jun 13 22:29:52 [interruptThread] closing listening socket: 12
m30000| Wed Jun 13 22:29:52 [interruptThread] closing listening socket: 14
m30000| Wed Jun 13 22:29:52 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Wed Jun 13 22:29:52 [interruptThread] shutdown: going to flush diaglog...
m30000| Wed Jun 13 22:29:52 [interruptThread] shutdown: going to close sockets...
m30000| Wed Jun 13 22:29:52 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Wed Jun 13 22:29:52 [interruptThread] shutdown: closing all files...
m30000| Wed Jun 13 22:29:52 [interruptThread] closeAllFiles() finished
m30000| Wed Jun 13 22:29:52 [interruptThread] shutdown: removing fs lock...
m30000| Wed Jun 13 22:29:52 dbexit: really exiting now
m31203| Wed Jun 13 22:29:53 [rsHealthPoll] replSet member tp2.10gen.cc:31204 is now in state RECOVERING
m31203| Wed Jun 13 22:29:53 [rsHealthPoll] replSet member tp2.10gen.cc:31205 is now in state RECOVERING
Wed Jun 13 22:29:53 shell: stopped mongo program on port 30000
*** ShardingTest add_shard2 completed successfully in 40.499 seconds ***
ReplSetTest n: 0 ports: [ 31200, 31201, 31202 ] 31200 number
ReplSetTest stop *** Shutting down mongod in port 31200 ***
m31200| Wed Jun 13 22:29:53 got signal 15 (Terminated), will terminate after current cmd ends
m31200| Wed Jun 13 22:29:53 [interruptThread] now exiting
m31200| Wed Jun 13 22:29:53 dbexit:
m31200| Wed Jun 13 22:29:53 [interruptThread] shutdown: going to close listening sockets...
m31200| Wed Jun 13 22:29:53 [interruptThread] closing listening socket: 24
m31200| Wed Jun 13 22:29:53 [interruptThread] closing listening socket: 26
m31200| Wed Jun 13 22:29:53 [interruptThread] closing listening socket: 28
m31200| Wed Jun 13 22:29:53 [interruptThread] removing socket file: /tmp/mongodb-31200.sock
m31200| Wed Jun 13 22:29:53 [interruptThread] shutdown: going to flush diaglog...
m31200| Wed Jun 13 22:29:53 [interruptThread] shutdown: going to close sockets...
m31200| Wed Jun 13 22:29:53 [interruptThread] shutdown: waiting for fs preallocator...
m31200| Wed Jun 13 22:29:53 [interruptThread] shutdown: closing all files...
m31201| Wed Jun 13 22:29:53 [conn5] end connection 184.173.149.242:59195 (3 connections now open)
m31202| Wed Jun 13 22:29:53 [conn7] end connection 184.173.149.242:42671 (3 connections now open)
m31200| Wed Jun 13 22:29:53 [conn1] end connection 184.173.149.242:40843 (7 connections now open)
m31202| Wed Jun 13 22:29:53 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: tp2.10gen.cc:31200
m31201| Wed Jun 13 22:29:53 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:29:53 [interruptThread] closeAllFiles() finished
m31200| Wed Jun 13 22:29:53 [interruptThread] shutdown: removing fs lock...
m31200| Wed Jun 13 22:29:53 dbexit: really exiting now
m31201| Wed Jun 13 22:29:54 [rsHealthPoll] DBClientCursor::init call() failed
m31202| Wed Jun 13 22:29:54 [conn4] end connection 184.173.149.242:42619 (2 connections now open)
m31201| Wed Jun 13 22:29:54 [rsHealthPoll] replSet info tp2.10gen.cc:31200 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31200 ns: admin.$cmd query: { replSetHeartbeat: "add_shard2_rs1", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31201" }
m31201| Wed Jun 13 22:29:54 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state DOWN
m31202| Wed Jun 13 22:29:54 [initandlisten] connection accepted from 184.173.149.242:42672 #8 (3 connections now open)
m31201| Wed Jun 13 22:29:54 [rsMgr] not electing self, tp2.10gen.cc:31202 would veto
m31201| Wed Jun 13 22:29:54 [conn4] end connection 184.173.149.242:59177 (2 connections now open)
m31202| Wed Jun 13 22:29:54 [rsHealthPoll] DBClientCursor::init call() failed
m31201| Wed Jun 13 22:29:54 [initandlisten] connection accepted from 184.173.149.242:59230 #8 (3 connections now open)
m31202| Wed Jun 13 22:29:54 [rsHealthPoll] replSet info tp2.10gen.cc:31200 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31200 ns: admin.$cmd query: { replSetHeartbeat: "add_shard2_rs1", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31202" }
m31202| Wed Jun 13 22:29:54 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state DOWN
m31202| Wed Jun 13 22:29:54 [rsMgr] replSet tie 1 sleeping a little 198ms
Wed Jun 13 22:29:54 shell: stopped mongo program on port 31200
ReplSetTest n: 1 ports: [ 31200, 31201, 31202 ] 31201 number
ReplSetTest stop *** Shutting down mongod in port 31201 ***
m31202| Wed Jun 13 22:29:54 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31201| Wed Jun 13 22:29:54 got signal 15 (Terminated), will terminate after current cmd ends
m31201| Wed Jun 13 22:29:54 [interruptThread] now exiting
m31201| Wed Jun 13 22:29:54 dbexit:
m31201| Wed Jun 13 22:29:54 [interruptThread] shutdown: going to close listening sockets...
m31201| Wed Jun 13 22:29:54 [interruptThread] closing listening socket: 27
m31201| Wed Jun 13 22:29:54 [interruptThread] closing listening socket: 31
m31201| Wed Jun 13 22:29:54 [interruptThread] closing listening socket: 32
m31201| Wed Jun 13 22:29:54 [interruptThread] removing socket file: /tmp/mongodb-31201.sock
m31201| Wed Jun 13 22:29:54 [interruptThread] shutdown: going to flush diaglog...
m31201| Wed Jun 13 22:29:54 [interruptThread] shutdown: going to close sockets...
m31201| Wed Jun 13 22:29:54 [interruptThread] shutdown: waiting for fs preallocator...
m31201| Wed Jun 13 22:29:54 [interruptThread] shutdown: closing all files...
m31202| Wed Jun 13 22:29:54 [conn8] end connection 184.173.149.242:42672 (2 connections now open)
m31201| Wed Jun 13 22:29:54 [conn1] end connection 184.173.149.242:59167 (2 connections now open)
m31201| Wed Jun 13 22:29:54 [interruptThread] closeAllFiles() finished
m31201| Wed Jun 13 22:29:54 [interruptThread] shutdown: removing fs lock...
m31201| Wed Jun 13 22:29:54 dbexit: really exiting now
Wed Jun 13 22:29:55 shell: stopped mongo program on port 31201
ReplSetTest n: 2 ports: [ 31200, 31201, 31202 ] 31202 number
ReplSetTest stop *** Shutting down mongod in port 31202 ***
m31202| Wed Jun 13 22:29:55 got signal 15 (Terminated), will terminate after current cmd ends
m31202| Wed Jun 13 22:29:55 [interruptThread] now exiting
m31202| Wed Jun 13 22:29:55 dbexit:
m31202| Wed Jun 13 22:29:55 [interruptThread] shutdown: going to close listening sockets...
m31202| Wed Jun 13 22:29:55 [interruptThread] closing listening socket: 30
m31202| Wed Jun 13 22:29:55 [interruptThread] closing listening socket: 32
m31202| Wed Jun 13 22:29:55 [interruptThread] closing listening socket: 34
m31202| Wed Jun 13 22:29:55 [interruptThread] removing socket file: /tmp/mongodb-31202.sock
m31202| Wed Jun 13 22:29:55 [interruptThread] shutdown: going to flush diaglog...
m31202| Wed Jun 13 22:29:55 [interruptThread] shutdown: going to close sockets...
m31202| Wed Jun 13 22:29:55 [interruptThread] shutdown: waiting for fs preallocator...
m31202| Wed Jun 13 22:29:55 [interruptThread] shutdown: closing all files...
m31202| Wed Jun 13 22:29:55 [conn1] end connection 184.173.149.242:42613 (1 connection now open)
m31202| Wed Jun 13 22:29:55 [interruptThread] closeAllFiles() finished
m31202| Wed Jun 13 22:29:55 [interruptThread] shutdown: removing fs lock...
m31202| Wed Jun 13 22:29:55 dbexit: really exiting now
Wed Jun 13 22:29:56 shell: stopped mongo program on port 31202
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
ReplSetTest n: 0 ports: [ 31203, 31204, 31205 ] 31203 number
ReplSetTest stop *** Shutting down mongod in port 31203 ***
m31203| Wed Jun 13 22:29:56 got signal 15 (Terminated), will terminate after current cmd ends
m31203| Wed Jun 13 22:29:56 [interruptThread] now exiting
m31203| Wed Jun 13 22:29:56 dbexit:
m31203| Wed Jun 13 22:29:56 [interruptThread] shutdown: going to close listening sockets...
m31203| Wed Jun 13 22:29:56 [interruptThread] closing listening socket: 33
m31203| Wed Jun 13 22:29:56 [interruptThread] closing listening socket: 37
m31203| Wed Jun 13 22:29:56 [interruptThread] closing listening socket: 38
m31203| Wed Jun 13 22:29:56 [interruptThread] removing socket file: /tmp/mongodb-31203.sock
m31203| Wed Jun 13 22:29:56 [interruptThread] shutdown: going to flush diaglog...
m31203| Wed Jun 13 22:29:56 [interruptThread] shutdown: going to close sockets...
m31203| Wed Jun 13 22:29:56 [interruptThread] shutdown: waiting for fs preallocator...
m31203| Wed Jun 13 22:29:56 [interruptThread] shutdown: closing all files...
m31205| Wed Jun 13 22:29:56 [conn3] end connection 184.173.149.242:53597 (3 connections now open)
m31204| Wed Jun 13 22:29:56 [conn3] end connection 184.173.149.242:60053 (3 connections now open)
m31203| Wed Jun 13 22:29:56 [conn1] end connection 184.173.149.242:43031 (3 connections now open)
m31203| Wed Jun 13 22:29:56 [interruptThread] closeAllFiles() finished
m31203| Wed Jun 13 22:29:56 [interruptThread] shutdown: removing fs lock...
m31203| Wed Jun 13 22:29:56 dbexit: really exiting now
m31204| Wed Jun 13 22:29:57 [rsHealthPoll] DBClientCursor::init call() failed
m31204| Wed Jun 13 22:29:57 [rsHealthPoll] replSet info tp2.10gen.cc:31203 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31203 ns: admin.$cmd query: { replSetHeartbeat: "add_shard2_rs2", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31204" }
m31204| Wed Jun 13 22:29:57 [rsHealthPoll] replSet member tp2.10gen.cc:31203 is now in state DOWN
m31205| Wed Jun 13 22:29:57 [rsHealthPoll] DBClientCursor::init call() failed
m31205| Wed Jun 13 22:29:57 [rsHealthPoll] replSet info tp2.10gen.cc:31203 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31203 ns: admin.$cmd query: { replSetHeartbeat: "add_shard2_rs2", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31205" }
m31205| Wed Jun 13 22:29:57 [rsHealthPoll] replSet member tp2.10gen.cc:31203 is now in state DOWN
Wed Jun 13 22:29:57 shell: stopped mongo program on port 31203
ReplSetTest n: 1 ports: [ 31203, 31204, 31205 ] 31204 number
ReplSetTest stop *** Shutting down mongod in port 31204 ***
m31204| Wed Jun 13 22:29:57 got signal 15 (Terminated), will terminate after current cmd ends
m31204| Wed Jun 13 22:29:57 [interruptThread] now exiting
m31204| Wed Jun 13 22:29:57 dbexit:
m31204| Wed Jun 13 22:29:57 [interruptThread] shutdown: going to close listening sockets...
m31204| Wed Jun 13 22:29:57 [interruptThread] closing listening socket: 36
m31204| Wed Jun 13 22:29:57 [interruptThread] closing listening socket: 40
m31204| Wed Jun 13 22:29:57 [interruptThread] closing listening socket: 41
m31204| Wed Jun 13 22:29:57 [interruptThread] removing socket file: /tmp/mongodb-31204.sock
m31204| Wed Jun 13 22:29:57 [interruptThread] shutdown: going to flush diaglog...
m31204| Wed Jun 13 22:29:57 [interruptThread] shutdown: going to close sockets...
m31204| Wed Jun 13 22:29:57 [interruptThread] shutdown: waiting for fs preallocator...
m31204| Wed Jun 13 22:29:57 [interruptThread] shutdown: closing all files...
m31205| Wed Jun 13 22:29:57 [conn4] end connection 184.173.149.242:53615 (2 connections now open)
m31204| Wed Jun 13 22:29:57 [conn1] end connection 184.173.149.242:60048 (2 connections now open)
m31204| Wed Jun 13 22:29:57 [interruptThread] closeAllFiles() finished
m31204| Wed Jun 13 22:29:57 [interruptThread] shutdown: removing fs lock...
m31204| Wed Jun 13 22:29:57 dbexit: really exiting now
Wed Jun 13 22:29:58 shell: stopped mongo program on port 31204
ReplSetTest n: 2 ports: [ 31203, 31204, 31205 ] 31205 number
ReplSetTest stop *** Shutting down mongod in port 31205 ***
m31205| Wed Jun 13 22:29:58 got signal 15 (Terminated), will terminate after current cmd ends
m31205| Wed Jun 13 22:29:58 [interruptThread] now exiting
m31205| Wed Jun 13 22:29:58 dbexit:
m31205| Wed Jun 13 22:29:58 [interruptThread] shutdown: going to close listening sockets...
m31205| Wed Jun 13 22:29:58 [interruptThread] closing listening socket: 39
m31205| Wed Jun 13 22:29:58 [interruptThread] closing listening socket: 40
m31205| Wed Jun 13 22:29:58 [interruptThread] closing listening socket: 42
m31205| Wed Jun 13 22:29:58 [interruptThread] removing socket file: /tmp/mongodb-31205.sock
m31205| Wed Jun 13 22:29:58 [interruptThread] shutdown: going to flush diaglog...
m31205| Wed Jun 13 22:29:58 [interruptThread] shutdown: going to close sockets...
m31205| Wed Jun 13 22:29:58 [interruptThread] shutdown: waiting for fs preallocator...
m31205| Wed Jun 13 22:29:58 [interruptThread] shutdown: closing all files...
m31205| Wed Jun 13 22:29:58 [conn1] end connection 184.173.149.242:53594 (1 connection now open)
m31205| Wed Jun 13 22:29:58 [interruptThread] closeAllFiles() finished
m31205| Wed Jun 13 22:29:58 [interruptThread] shutdown: removing fs lock...
m31205| Wed Jun 13 22:29:58 dbexit: really exiting now
Wed Jun 13 22:29:59 shell: stopped mongo program on port 31205
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
48633.893013ms
Wed Jun 13 22:30:01 [initandlisten] connection accepted from 127.0.0.1:53900 #3 (2 connections now open)
*******************************************
Test : addshard3.js ...
Command : /home/yellow/buildslave/Linux_32bit_debug/mongo/mongo --port 27999 --nodb /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard3.js --eval TestData = new Object();TestData.testPath = "/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard3.js";TestData.testFile = "addshard3.js";TestData.testName = "addshard3";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Wed Jun 13 22:30:01 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/add_shard30'
Wed Jun 13 22:30:01 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30000 --dbpath /data/db/add_shard30
m30000| Wed Jun 13 22:30:01
m30000| Wed Jun 13 22:30:01 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Wed Jun 13 22:30:01
m30000| Wed Jun 13 22:30:02 [initandlisten] MongoDB starting : pid=9629 port=30000 dbpath=/data/db/add_shard30 32-bit host=tp2.10gen.cc
m30000| Wed Jun 13 22:30:02 [initandlisten] _DEBUG build (which is slower)
m30000| Wed Jun 13 22:30:02 [initandlisten]
m30000| Wed Jun 13 22:30:02 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Wed Jun 13 22:30:02 [initandlisten] ** Not recommended for production.
m30000| Wed Jun 13 22:30:02 [initandlisten]
m30000| Wed Jun 13 22:30:02 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Wed Jun 13 22:30:02 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Wed Jun 13 22:30:02 [initandlisten] ** with --journal, the limit is lower
m30000| Wed Jun 13 22:30:02 [initandlisten]
m30000| Wed Jun 13 22:30:02 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Wed Jun 13 22:30:02 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Wed Jun 13 22:30:02 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30000| Wed Jun 13 22:30:02 [initandlisten] options: { dbpath: "/data/db/add_shard30", port: 30000 }
m30000| Wed Jun 13 22:30:02 [initandlisten] opening db: local
m30000| Wed Jun 13 22:30:02 [initandlisten] waiting for connections on port 30000
m30000| Wed Jun 13 22:30:02 [websvr] admin web console waiting for connections on port 31000
m30000| Wed Jun 13 22:30:02 [initandlisten] connection accepted from 127.0.0.1:56862 #1 (1 connection now open)
"localhost:30000"
m30000| Wed Jun 13 22:30:02 [initandlisten] connection accepted from 127.0.0.1:56863 #2 (2 connections now open)
m30000| Wed Jun 13 22:30:02 [conn2] opening db: config
ShardingTest add_shard3 :
{
"config" : "localhost:30000",
"shards" : [
connection to localhost:30000
]
}
m30000| Wed Jun 13 22:30:02 [FileAllocator] allocating new datafile /data/db/add_shard30/config.ns, filling with zeroes...
m30000| Wed Jun 13 22:30:02 [FileAllocator] creating directory /data/db/add_shard30/_tmp
Wed Jun 13 22:30:02 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --port 30999 --configdb localhost:30000
m30999| Wed Jun 13 22:30:02 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Jun 13 22:30:02 [mongosMain] MongoS version 2.1.2-pre- starting: pid=9644 port=30999 32-bit host=tp2.10gen.cc (--help for usage)
m30999| Wed Jun 13 22:30:02 [mongosMain] _DEBUG build
m30999| Wed Jun 13 22:30:02 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Wed Jun 13 22:30:02 [mongosMain] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30999| Wed Jun 13 22:30:02 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Wed Jun 13 22:30:02 [initandlisten] connection accepted from 127.0.0.1:56865 #3 (3 connections now open)
m30000| Wed Jun 13 22:30:02 [FileAllocator] done allocating datafile /data/db/add_shard30/config.ns, size: 16MB, took 0.04 secs
m30000| Wed Jun 13 22:30:02 [FileAllocator] allocating new datafile /data/db/add_shard30/config.0, filling with zeroes...
m30000| Wed Jun 13 22:30:02 [FileAllocator] done allocating datafile /data/db/add_shard30/config.0, size: 16MB, took 0.04 secs
m30000| Wed Jun 13 22:30:02 [conn2] datafileheader::init initializing /data/db/add_shard30/config.0 n:0
m30000| Wed Jun 13 22:30:02 [FileAllocator] allocating new datafile /data/db/add_shard30/config.1, filling with zeroes...
m30000| Wed Jun 13 22:30:02 [conn2] build index config.settings { _id: 1 }
m30000| Wed Jun 13 22:30:02 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:02 [initandlisten] connection accepted from 127.0.0.1:56866 #4 (4 connections now open)
m30000| Wed Jun 13 22:30:02 [initandlisten] connection accepted from 127.0.0.1:56867 #5 (5 connections now open)
m30000| Wed Jun 13 22:30:02 [conn5] build index config.version { _id: 1 }
m30000| Wed Jun 13 22:30:02 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:02 [conn4] build index config.chunks { _id: 1 }
m30999| Wed Jun 13 22:30:02 [mongosMain] waiting for connections on port 30999
m30000| Wed Jun 13 22:30:02 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:02 [conn4] info: creating collection config.chunks on add index
m30000| Wed Jun 13 22:30:02 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Wed Jun 13 22:30:02 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:02 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Wed Jun 13 22:30:02 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:02 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Wed Jun 13 22:30:02 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:02 [conn4] build index config.shards { _id: 1 }
m30000| Wed Jun 13 22:30:02 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:02 [conn4] info: creating collection config.shards on add index
m30000| Wed Jun 13 22:30:02 [conn4] build index config.shards { host: 1 }
m30000| Wed Jun 13 22:30:02 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:30:02 [websvr] admin web console waiting for connections on port 31999
m30999| Wed Jun 13 22:30:02 [Balancer] about to contact config servers and shards
m30999| Wed Jun 13 22:30:02 [Balancer] config servers and shards contacted successfully
m30999| Wed Jun 13 22:30:02 [Balancer] balancer id: tp2.10gen.cc:30999 started at Jun 13 22:30:02
m30999| Wed Jun 13 22:30:02 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:30:02 [initandlisten] connection accepted from 127.0.0.1:56868 #6 (6 connections now open)
m30000| Wed Jun 13 22:30:02 [conn5] build index config.mongos { _id: 1 }
m30000| Wed Jun 13 22:30:02 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:30:02 [LockPinger] creating distributed lock ping thread for localhost:30000 and process tp2.10gen.cc:30999:1339644602:1804289383 (sleeping for 30000ms)
m30000| Wed Jun 13 22:30:02 [conn4] build index config.lockpings { _id: 1 }
m30000| Wed Jun 13 22:30:02 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:02 [conn6] build index config.locks { _id: 1 }
m30000| Wed Jun 13 22:30:02 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:02 [conn4] build index config.lockpings { ping: 1 }
m30000| Wed Jun 13 22:30:02 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Wed Jun 13 22:30:02 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644602:1804289383' acquired, ts : 4fd95aba7b29065b44129030
m30999| Wed Jun 13 22:30:02 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644602:1804289383' unlocked.
m30000| Wed Jun 13 22:30:02 [FileAllocator] done allocating datafile /data/db/add_shard30/config.1, size: 32MB, took 0.073 secs
m30999| Wed Jun 13 22:30:02 [mongosMain] connection accepted from 127.0.0.1:50026 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Wed Jun 13 22:30:02 [conn] couldn't find database [admin] in config db
m30000| Wed Jun 13 22:30:02 [conn4] build index config.databases { _id: 1 }
m30000| Wed Jun 13 22:30:02 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:30:02 [conn] put [admin] on: config:localhost:30000
m30999| Wed Jun 13 22:30:02 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
Wed Jun 13 22:30:06 [clientcursormon] mem (MB) res:20 virt:121 mapped:0
m30999| Wed Jun 13 22:30:10 [conn] DBClientCursor::init call() failed
m30999| Wed Jun 13 22:30:10 [conn] addshard request { addshard: "localhost:31000" } failed: couldn't connect to new shard DBClientBase::findN: transport error: localhost:31000 ns: admin.$cmd query: { getlasterror: 1 }
{
"ok" : 0,
"errmsg" : "couldn't connect to new shard DBClientBase::findN: transport error: localhost:31000 ns: admin.$cmd query: { getlasterror: 1 }"
}
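
For reference, a minimal mongo-shell sketch of the addshard request behind the failure above (this is not the literal addshard3.js source; the mongos port 30999 and the unreachable localhost:31000 target are taken from this log, the rest is illustrative):

// Issue the addshard command against the mongos started above.
// Nothing listens on localhost:31000, so the expected reply is ok: 0
// with a "couldn't connect to new shard" errmsg, as logged.
var mongos = new Mongo("localhost:30999");
var admin  = mongos.getDB("admin");
var res    = admin.runCommand({ addshard: "localhost:31000" });
printjson(res);
assert.eq(0, res.ok, "adding an unreachable host should fail");
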
10497.502089ms
Wed Jun 13 22:30:12 [initandlisten] connection accepted from 127.0.0.1:53911 #4 (3 connections now open)
*******************************************
Test : addshard4.js ...
Command : /home/yellow/buildslave/Linux_32bit_debug/mongo/mongo --port 27999 --nodb /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard4.js --eval TestData = new Object();TestData.testPath = "/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard4.js";TestData.testFile = "addshard4.js";TestData.testName = "addshard4";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Wed Jun 13 22:30:12 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/addshard40'
Wed Jun 13 22:30:12 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30000 --dbpath /data/db/addshard40
m30000| Wed Jun 13 22:30:12
m30000| Wed Jun 13 22:30:12 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Wed Jun 13 22:30:12
m30000| Wed Jun 13 22:30:12 [initandlisten] MongoDB starting : pid=9667 port=30000 dbpath=/data/db/addshard40 32-bit host=tp2.10gen.cc
m30000| Wed Jun 13 22:30:12 [initandlisten] _DEBUG build (which is slower)
m30000| Wed Jun 13 22:30:12 [initandlisten]
m30000| Wed Jun 13 22:30:12 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Wed Jun 13 22:30:12 [initandlisten] ** Not recommended for production.
m30000| Wed Jun 13 22:30:12 [initandlisten]
m30000| Wed Jun 13 22:30:12 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Wed Jun 13 22:30:12 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Wed Jun 13 22:30:12 [initandlisten] ** with --journal, the limit is lower
m30000| Wed Jun 13 22:30:12 [initandlisten]
m30000| Wed Jun 13 22:30:12 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Wed Jun 13 22:30:12 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Wed Jun 13 22:30:12 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30000| Wed Jun 13 22:30:12 [initandlisten] options: { dbpath: "/data/db/addshard40", port: 30000 }
m30000| Wed Jun 13 22:30:12 [initandlisten] opening db: local
m30000| Wed Jun 13 22:30:12 [initandlisten] waiting for connections on port 30000
m30000| Wed Jun 13 22:30:12 [websvr] admin web console waiting for connections on port 31000
m30000| Wed Jun 13 22:30:12 [initandlisten] connection accepted from 127.0.0.1:56873 #1 (1 connection now open)
Resetting db path '/data/db/addshard41'
Wed Jun 13 22:30:12 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30001 --dbpath /data/db/addshard41
m30001| Wed Jun 13 22:30:12
m30001| Wed Jun 13 22:30:12 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Wed Jun 13 22:30:12
m30001| Wed Jun 13 22:30:12 [initandlisten] MongoDB starting : pid=9680 port=30001 dbpath=/data/db/addshard41 32-bit host=tp2.10gen.cc
m30001| Wed Jun 13 22:30:12 [initandlisten] _DEBUG build (which is slower)
m30001| Wed Jun 13 22:30:12 [initandlisten]
m30001| Wed Jun 13 22:30:12 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Wed Jun 13 22:30:12 [initandlisten] ** Not recommended for production.
m30001| Wed Jun 13 22:30:12 [initandlisten]
m30001| Wed Jun 13 22:30:12 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Wed Jun 13 22:30:12 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Wed Jun 13 22:30:12 [initandlisten] ** with --journal, the limit is lower
m30001| Wed Jun 13 22:30:12 [initandlisten]
m30001| Wed Jun 13 22:30:12 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Wed Jun 13 22:30:12 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Wed Jun 13 22:30:12 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30001| Wed Jun 13 22:30:12 [initandlisten] options: { dbpath: "/data/db/addshard41", port: 30001 }
m30001| Wed Jun 13 22:30:12 [initandlisten] opening db: local
m30001| Wed Jun 13 22:30:12 [websvr] admin web console waiting for connections on port 31001
m30001| Wed Jun 13 22:30:12 [initandlisten] waiting for connections on port 30001
m30001| Wed Jun 13 22:30:12 [initandlisten] connection accepted from 127.0.0.1:51137 #1 (1 connection now open)
"tp2.10gen.cc:30000"
m30000| Wed Jun 13 22:30:12 [initandlisten] connection accepted from 184.173.149.242:50756 #2 (2 connections now open)
m30000| Wed Jun 13 22:30:12 [conn2] opening db: config
ShardingTest addshard4 :
{
"config" : "tp2.10gen.cc:30000",
"shards" : [
connection to tp2.10gen.cc:30000,
connection to tp2.10gen.cc:30001
]
}
m30000| Wed Jun 13 22:30:12 [FileAllocator] allocating new datafile /data/db/addshard40/config.ns, filling with zeroes...
m30000| Wed Jun 13 22:30:12 [FileAllocator] creating directory /data/db/addshard40/_tmp
Wed Jun 13 22:30:12 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --port 30999 --configdb tp2.10gen.cc:30000
m30999| Wed Jun 13 22:30:12 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Jun 13 22:30:12 [mongosMain] MongoS version 2.1.2-pre- starting: pid=9695 port=30999 32-bit host=tp2.10gen.cc (--help for usage)
m30999| Wed Jun 13 22:30:12 [mongosMain] _DEBUG build
m30999| Wed Jun 13 22:30:12 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Wed Jun 13 22:30:12 [mongosMain] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30999| Wed Jun 13 22:30:12 [mongosMain] options: { configdb: "tp2.10gen.cc:30000", port: 30999 }
m30000| Wed Jun 13 22:30:12 [initandlisten] connection accepted from 184.173.149.242:50758 #3 (3 connections now open)
m30000| Wed Jun 13 22:30:12 [FileAllocator] done allocating datafile /data/db/addshard40/config.ns, size: 16MB, took 0.039 secs
m30000| Wed Jun 13 22:30:12 [FileAllocator] allocating new datafile /data/db/addshard40/config.0, filling with zeroes...
m30000| Wed Jun 13 22:30:12 [FileAllocator] done allocating datafile /data/db/addshard40/config.0, size: 16MB, took 0.035 secs
m30000| Wed Jun 13 22:30:12 [conn2] datafileheader::init initializing /data/db/addshard40/config.0 n:0
m30000| Wed Jun 13 22:30:12 [FileAllocator] allocating new datafile /data/db/addshard40/config.1, filling with zeroes...
m30000| Wed Jun 13 22:30:12 [conn2] build index config.settings { _id: 1 }
m30000| Wed Jun 13 22:30:12 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:12 [initandlisten] connection accepted from 184.173.149.242:50759 #4 (4 connections now open)
m30000| Wed Jun 13 22:30:12 [initandlisten] connection accepted from 184.173.149.242:50760 #5 (5 connections now open)
m30000| Wed Jun 13 22:30:12 [conn5] build index config.version { _id: 1 }
m30000| Wed Jun 13 22:30:12 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:12 [conn4] build index config.chunks { _id: 1 }
m30999| Wed Jun 13 22:30:12 [Balancer] about to contact config servers and shards
m30999| Wed Jun 13 22:30:12 [mongosMain] waiting for connections on port 30999
m30000| Wed Jun 13 22:30:12 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:12 [conn4] info: creating collection config.chunks on add index
m30000| Wed Jun 13 22:30:12 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Wed Jun 13 22:30:12 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:30:12 [websvr] admin web console waiting for connections on port 31999
m30999| Wed Jun 13 22:30:12 [Balancer] config servers and shards contacted successfully
m30000| Wed Jun 13 22:30:12 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30999| Wed Jun 13 22:30:12 [Balancer] balancer id: tp2.10gen.cc:30999 started at Jun 13 22:30:12
m30999| Wed Jun 13 22:30:12 [Balancer] created new distributed lock for balancer on tp2.10gen.cc:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:30:12 [initandlisten] connection accepted from 184.173.149.242:50761 #6 (6 connections now open)
m30000| Wed Jun 13 22:30:12 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:12 [conn5] build index config.mongos { _id: 1 }
m30000| Wed Jun 13 22:30:12 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:12 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Wed Jun 13 22:30:12 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:12 [conn4] build index config.shards { _id: 1 }
m30000| Wed Jun 13 22:30:12 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:12 [conn4] info: creating collection config.shards on add index
m30000| Wed Jun 13 22:30:12 [conn4] build index config.shards { host: 1 }
m30000| Wed Jun 13 22:30:12 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:30:12 [LockPinger] creating distributed lock ping thread for tp2.10gen.cc:30000 and process tp2.10gen.cc:30999:1339644612:1804289383 (sleeping for 30000ms)
m30000| Wed Jun 13 22:30:12 [conn4] build index config.lockpings { _id: 1 }
m30000| Wed Jun 13 22:30:12 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:12 [conn6] build index config.locks { _id: 1 }
m30000| Wed Jun 13 22:30:12 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:30:12 [conn4] build index config.lockpings { ping: 1 }
m30000| Wed Jun 13 22:30:12 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Wed Jun 13 22:30:12 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' acquired, ts : 4fd95ac40a7325a56e63017d
m30999| Wed Jun 13 22:30:12 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' unlocked.
m30000| Wed Jun 13 22:30:13 [FileAllocator] done allocating datafile /data/db/addshard40/config.1, size: 32MB, took 0.068 secs
m30999| Wed Jun 13 22:30:13 [mongosMain] connection accepted from 127.0.0.1:50039 #1 (1 connection now open)
ShardingTest undefined going to add shard : tp2.10gen.cc:30000
m30999| Wed Jun 13 22:30:13 [conn] couldn't find database [admin] in config db
m30000| Wed Jun 13 22:30:13 [conn4] build index config.databases { _id: 1 }
m30000| Wed Jun 13 22:30:13 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:30:13 [conn] put [admin] on: config:tp2.10gen.cc:30000
m30999| Wed Jun 13 22:30:13 [conn] going to add shard: { _id: "shard0000", host: "tp2.10gen.cc:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : tp2.10gen.cc:30001
m30001| Wed Jun 13 22:30:13 [initandlisten] connection accepted from 184.173.149.242:37742 #2 (2 connections now open)
m30999| Wed Jun 13 22:30:13 [conn] going to add shard: { _id: "shard0001", host: "tp2.10gen.cc:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31100,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "addshard4",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 0,
"set" : "addshard4"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard4-0'
Wed Jun 13 22:30:13 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet addshard4 --dbpath /data/db/addshard4-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Wed Jun 13 22:30:13
m31100| Wed Jun 13 22:30:13 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Wed Jun 13 22:30:13
m31100| Wed Jun 13 22:30:13 [initandlisten] MongoDB starting : pid=9716 port=31100 dbpath=/data/db/addshard4-0 32-bit host=tp2.10gen.cc
m31100| Wed Jun 13 22:30:13 [initandlisten] _DEBUG build (which is slower)
m31100| Wed Jun 13 22:30:13 [initandlisten]
m31100| Wed Jun 13 22:30:13 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Wed Jun 13 22:30:13 [initandlisten] ** Not recommended for production.
m31100| Wed Jun 13 22:30:13 [initandlisten]
m31100| Wed Jun 13 22:30:13 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Wed Jun 13 22:30:13 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Wed Jun 13 22:30:13 [initandlisten] ** with --journal, the limit is lower
m31100| Wed Jun 13 22:30:13 [initandlisten]
m31100| Wed Jun 13 22:30:13 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Wed Jun 13 22:30:13 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Wed Jun 13 22:30:13 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31100| Wed Jun 13 22:30:13 [initandlisten] options: { dbpath: "/data/db/addshard4-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "addshard4", rest: true, smallfiles: true }
m31100| Wed Jun 13 22:30:13 [initandlisten] waiting for connections on port 31100
m31100| Wed Jun 13 22:30:13 [websvr] admin web console waiting for connections on port 32100
m31100| Wed Jun 13 22:30:13 [initandlisten] connection accepted from 184.173.149.242:42643 #1 (1 connection now open)
m31100| Wed Jun 13 22:30:13 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31100| Wed Jun 13 22:30:13 [conn1] opening db: local
m31100| Wed Jun 13 22:30:13 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Wed Jun 13 22:30:13 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31100| Wed Jun 13 22:30:13 [initandlisten] connection accepted from 127.0.0.1:50734 #2 (2 connections now open)
[ connection to localhost:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31101,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "addshard4",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 1,
"set" : "addshard4"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard4-1'
Wed Jun 13 22:30:13 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet addshard4 --dbpath /data/db/addshard4-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Wed Jun 13 22:30:13
m31101| Wed Jun 13 22:30:13 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Wed Jun 13 22:30:13
m31101| Wed Jun 13 22:30:13 [initandlisten] MongoDB starting : pid=9732 port=31101 dbpath=/data/db/addshard4-1 32-bit host=tp2.10gen.cc
m31101| Wed Jun 13 22:30:13 [initandlisten] _DEBUG build (which is slower)
m31101| Wed Jun 13 22:30:13 [initandlisten]
m31101| Wed Jun 13 22:30:13 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Wed Jun 13 22:30:13 [initandlisten] ** Not recommended for production.
m31101| Wed Jun 13 22:30:13 [initandlisten]
m31101| Wed Jun 13 22:30:13 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Wed Jun 13 22:30:13 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Wed Jun 13 22:30:13 [initandlisten] ** with --journal, the limit is lower
m31101| Wed Jun 13 22:30:13 [initandlisten]
m31101| Wed Jun 13 22:30:13 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Wed Jun 13 22:30:13 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Wed Jun 13 22:30:13 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31101| Wed Jun 13 22:30:13 [initandlisten] options: { dbpath: "/data/db/addshard4-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "addshard4", rest: true, smallfiles: true }
m31101| Wed Jun 13 22:30:13 [initandlisten] waiting for connections on port 31101
m31101| Wed Jun 13 22:30:13 [websvr] admin web console waiting for connections on port 32101
m31101| Wed Jun 13 22:30:13 [initandlisten] connection accepted from 184.173.149.242:56355 #1 (1 connection now open)
m31101| Wed Jun 13 22:30:13 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31101| Wed Jun 13 22:30:13 [conn1] opening db: local
m31101| Wed Jun 13 22:30:13 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Wed Jun 13 22:30:13 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31101| Wed Jun 13 22:30:13 [initandlisten] connection accepted from 127.0.0.1:55045 #2 (2 connections now open)
[ connection to localhost:31100, connection to localhost:31101 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
"useHostName" : undefined,
"oplogSize" : 40,
"keyFile" : undefined,
"port" : 31102,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "addshard4",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 2,
"set" : "addshard4"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard4-2'
Wed Jun 13 22:30:13 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31102 --noprealloc --smallfiles --rest --replSet addshard4 --dbpath /data/db/addshard4-2
m31102| note: noprealloc may hurt performance in many applications
m31102| Wed Jun 13 22:30:13
m31102| Wed Jun 13 22:30:13 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31102| Wed Jun 13 22:30:13
m31102| Wed Jun 13 22:30:13 [initandlisten] MongoDB starting : pid=9748 port=31102 dbpath=/data/db/addshard4-2 32-bit host=tp2.10gen.cc
m31102| Wed Jun 13 22:30:13 [initandlisten] _DEBUG build (which is slower)
m31102| Wed Jun 13 22:30:13 [initandlisten]
m31102| Wed Jun 13 22:30:13 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31102| Wed Jun 13 22:30:13 [initandlisten] ** Not recommended for production.
m31102| Wed Jun 13 22:30:13 [initandlisten]
m31102| Wed Jun 13 22:30:13 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31102| Wed Jun 13 22:30:13 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31102| Wed Jun 13 22:30:13 [initandlisten] ** with --journal, the limit is lower
m31102| Wed Jun 13 22:30:13 [initandlisten]
m31102| Wed Jun 13 22:30:13 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31102| Wed Jun 13 22:30:13 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31102| Wed Jun 13 22:30:13 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31102| Wed Jun 13 22:30:13 [initandlisten] options: { dbpath: "/data/db/addshard4-2", noprealloc: true, oplogSize: 40, port: 31102, replSet: "addshard4", rest: true, smallfiles: true }
m31102| Wed Jun 13 22:30:13 [initandlisten] waiting for connections on port 31102
m31102| Wed Jun 13 22:30:13 [websvr] admin web console waiting for connections on port 32102
m31102| Wed Jun 13 22:30:13 [initandlisten] connection accepted from 184.173.149.242:52516 #1 (1 connection now open)
m31102| Wed Jun 13 22:30:13 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31102| Wed Jun 13 22:30:13 [conn1] opening db: local
m31102| Wed Jun 13 22:30:13 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Wed Jun 13 22:30:13 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Wed Jun 13 22:30:13 [initandlisten] connection accepted from 127.0.0.1:33973 #2 (2 connections now open)
[
connection to localhost:31100,
connection to localhost:31101,
connection to localhost:31102
]
{
"replSetInitiate" : {
"_id" : "addshard4",
"members" : [
{
"_id" : 0,
"host" : "tp2.10gen.cc:31100"
},
{
"_id" : 1,
"host" : "tp2.10gen.cc:31101"
},
{
"_id" : 2,
"host" : "tp2.10gen.cc:31102",
"priority" : 0
}
]
}
}
m31100| Wed Jun 13 22:30:13 [conn2] replSet replSetInitiate admin command received from client
m31100| Wed Jun 13 22:30:13 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31101| Wed Jun 13 22:30:13 [initandlisten] connection accepted from 184.173.149.242:56360 #3 (3 connections now open)
m31102| Wed Jun 13 22:30:13 [initandlisten] connection accepted from 184.173.149.242:52519 #3 (3 connections now open)
m31100| Wed Jun 13 22:30:13 [conn2] replSet replSetInitiate all members seem up
m31100| Wed Jun 13 22:30:13 [conn2] ******
m31100| Wed Jun 13 22:30:13 [conn2] creating replication oplog of size: 40MB...
m31100| Wed Jun 13 22:30:13 [FileAllocator] allocating new datafile /data/db/addshard4-0/local.ns, filling with zeroes...
m31100| Wed Jun 13 22:30:13 [FileAllocator] creating directory /data/db/addshard4-0/_tmp
m31100| Wed Jun 13 22:30:13 [FileAllocator] done allocating datafile /data/db/addshard4-0/local.ns, size: 16MB, took 0.038 secs
m31100| Wed Jun 13 22:30:13 [FileAllocator] allocating new datafile /data/db/addshard4-0/local.0, filling with zeroes...
m31100| Wed Jun 13 22:30:13 [FileAllocator] done allocating datafile /data/db/addshard4-0/local.0, size: 64MB, took 0.115 secs
m31100| Wed Jun 13 22:30:13 [conn2] datafileheader::init initializing /data/db/addshard4-0/local.0 n:0
m31100| Wed Jun 13 22:30:13 [conn2] ******
m31100| Wed Jun 13 22:30:13 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Wed Jun 13 22:30:13 [conn2] replSet saveConfigLocally done
m31100| Wed Jun 13 22:30:13 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Wed Jun 13 22:30:13 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "addshard4", members: [ { _id: 0.0, host: "tp2.10gen.cc:31100" }, { _id: 1.0, host: "tp2.10gen.cc:31101" }, { _id: 2.0, host: "tp2.10gen.cc:31102", priority: 0.0 } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:169725 w:72 reslen:112 170ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
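
A hedged sketch of the replSetInitiate call the harness issues here; the set name, hosts, and priority come from the config printed above, everything else is illustrative rather than the test's literal source:

// Initiate the 3-member set on the first node; member 2 has priority 0,
// so it can never be elected primary.
var cfg = {
    _id: "addshard4",
    members: [
        { _id: 0, host: "tp2.10gen.cc:31100" },
        { _id: 1, host: "tp2.10gen.cc:31101" },
        { _id: 2, host: "tp2.10gen.cc:31102", priority: 0 }
    ]
};
var node = new Mongo("tp2.10gen.cc:31100");
printjson(node.getDB("admin").runCommand({ replSetInitiate: cfg }));
// expect: { info: "Config now saved locally. Should come online in about a minute.", ok: 1 }
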
m30999| Wed Jun 13 22:30:23 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' acquired, ts : 4fd95ace0a7325a56e63017e
m30999| Wed Jun 13 22:30:23 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' unlocked.
m31100| Wed Jun 13 22:30:23 [rsStart] replSet load config ok from self
m31100| Wed Jun 13 22:30:23 [rsStart] replSet I am tp2.10gen.cc:31100
m31100| Wed Jun 13 22:30:23 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31102
m31100| Wed Jun 13 22:30:23 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31101
m31100| Wed Jun 13 22:30:23 [rsStart] replSet STARTUP2
m31100| Wed Jun 13 22:30:23 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is up
m31100| Wed Jun 13 22:30:23 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is up
m31100| Wed Jun 13 22:30:23 [rsSync] replSet SECONDARY
m31101| Wed Jun 13 22:30:23 [rsStart] trying to contact tp2.10gen.cc:31100
m31100| Wed Jun 13 22:30:23 [initandlisten] connection accepted from 184.173.149.242:42653 #3 (3 connections now open)
m31101| Wed Jun 13 22:30:23 [rsStart] replSet load config ok from tp2.10gen.cc:31100
m31101| Wed Jun 13 22:30:23 [rsStart] replSet I am tp2.10gen.cc:31101
m31101| Wed Jun 13 22:30:23 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31102
m31101| Wed Jun 13 22:30:23 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31100
m31101| Wed Jun 13 22:30:23 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Wed Jun 13 22:30:23 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Wed Jun 13 22:30:23 [FileAllocator] allocating new datafile /data/db/addshard4-1/local.ns, filling with zeroes...
m31101| Wed Jun 13 22:30:23 [FileAllocator] creating directory /data/db/addshard4-1/_tmp
m31101| Wed Jun 13 22:30:23 [FileAllocator] done allocating datafile /data/db/addshard4-1/local.ns, size: 16MB, took 0.036 secs
m31101| Wed Jun 13 22:30:23 [FileAllocator] allocating new datafile /data/db/addshard4-1/local.0, filling with zeroes...
m31101| Wed Jun 13 22:30:23 [FileAllocator] done allocating datafile /data/db/addshard4-1/local.0, size: 16MB, took 0.034 secs
m31101| Wed Jun 13 22:30:23 [rsStart] datafileheader::init initializing /data/db/addshard4-1/local.0 n:0
m31101| Wed Jun 13 22:30:23 [rsStart] replSet saveConfigLocally done
m31101| Wed Jun 13 22:30:23 [rsStart] replSet STARTUP2
m31101| Wed Jun 13 22:30:23 [rsSync] ******
m31101| Wed Jun 13 22:30:23 [rsSync] creating replication oplog of size: 40MB...
m31101| Wed Jun 13 22:30:23 [FileAllocator] allocating new datafile /data/db/addshard4-1/local.1, filling with zeroes...
m31102| Wed Jun 13 22:30:23 [rsStart] trying to contact tp2.10gen.cc:31100
m31100| Wed Jun 13 22:30:23 [initandlisten] connection accepted from 184.173.149.242:42654 #4 (4 connections now open)
m31102| Wed Jun 13 22:30:23 [rsStart] replSet load config ok from tp2.10gen.cc:31100
m31102| Wed Jun 13 22:30:23 [rsStart] replSet I am tp2.10gen.cc:31102
m31102| Wed Jun 13 22:30:23 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31101
m31102| Wed Jun 13 22:30:23 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31100
m31102| Wed Jun 13 22:30:23 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Wed Jun 13 22:30:23 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Wed Jun 13 22:30:23 [FileAllocator] allocating new datafile /data/db/addshard4-2/local.ns, filling with zeroes...
m31102| Wed Jun 13 22:30:23 [FileAllocator] creating directory /data/db/addshard4-2/_tmp
m31101| Wed Jun 13 22:30:23 [FileAllocator] done allocating datafile /data/db/addshard4-1/local.1, size: 64MB, took 0.119 secs
m31101| Wed Jun 13 22:30:23 [rsSync] datafileheader::init initializing /data/db/addshard4-1/local.1 n:1
m31101| Wed Jun 13 22:30:23 [rsSync] ******
m31101| Wed Jun 13 22:30:23 [rsSync] replSet initial sync pending
m31101| Wed Jun 13 22:30:23 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31102| Wed Jun 13 22:30:23 [FileAllocator] done allocating datafile /data/db/addshard4-2/local.ns, size: 16MB, took 0.036 secs
m31102| Wed Jun 13 22:30:23 [FileAllocator] allocating new datafile /data/db/addshard4-2/local.0, filling with zeroes...
m31102| Wed Jun 13 22:30:23 [FileAllocator] done allocating datafile /data/db/addshard4-2/local.0, size: 16MB, took 0.036 secs
m31102| Wed Jun 13 22:30:23 [rsStart] datafileheader::init initializing /data/db/addshard4-2/local.0 n:0
m31102| Wed Jun 13 22:30:23 [rsStart] replSet saveConfigLocally done
m31102| Wed Jun 13 22:30:23 [rsStart] replSet STARTUP2
m31102| Wed Jun 13 22:30:23 [rsSync] ******
m31102| Wed Jun 13 22:30:23 [rsSync] creating replication oplog of size: 40MB...
m31102| Wed Jun 13 22:30:23 [FileAllocator] allocating new datafile /data/db/addshard4-2/local.1, filling with zeroes...
m31102| Wed Jun 13 22:30:23 [FileAllocator] done allocating datafile /data/db/addshard4-2/local.1, size: 64MB, took 0.239 secs
m31102| Wed Jun 13 22:30:23 [rsSync] datafileheader::init initializing /data/db/addshard4-2/local.1 n:1
m31102| Wed Jun 13 22:30:23 [rsSync] ******
m31102| Wed Jun 13 22:30:23 [rsSync] replSet initial sync pending
m31102| Wed Jun 13 22:30:23 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state STARTUP2
m31100| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state STARTUP2
m31100| Wed Jun 13 22:30:25 [rsMgr] not electing self, tp2.10gen.cc:31102 would veto
m31100| Wed Jun 13 22:30:25 [rsMgr] not electing self, tp2.10gen.cc:31102 would veto
m31102| Wed Jun 13 22:30:25 [initandlisten] connection accepted from 184.173.149.242:52522 #4 (4 connections now open)
m31101| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is up
m31101| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state SECONDARY
m31101| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is up
m31101| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state STARTUP2
m31102| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is up
m31102| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state SECONDARY
m31101| Wed Jun 13 22:30:25 [initandlisten] connection accepted from 184.173.149.242:56365 #4 (4 connections now open)
m31102| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is up
m31102| Wed Jun 13 22:30:25 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state STARTUP2
m31100| Wed Jun 13 22:30:31 [rsMgr] replSet info electSelf 0
m31102| Wed Jun 13 22:30:31 [conn3] replSet received elect msg { replSetElect: 1, set: "addshard4", who: "tp2.10gen.cc:31100", whoid: 0, cfgver: 1, round: ObjectId('4fd95ad786611e55db1ae811') }
m31102| Wed Jun 13 22:30:31 [conn3] replSet RECOVERING
m31101| Wed Jun 13 22:30:31 [conn3] replSet received elect msg { replSetElect: 1, set: "addshard4", who: "tp2.10gen.cc:31100", whoid: 0, cfgver: 1, round: ObjectId('4fd95ad786611e55db1ae811') }
m31102| Wed Jun 13 22:30:31 [conn3] replSet info voting yea for tp2.10gen.cc:31100 (0)
m31101| Wed Jun 13 22:30:31 [conn3] replSet RECOVERING
m31101| Wed Jun 13 22:30:31 [conn3] replSet info voting yea for tp2.10gen.cc:31100 (0)
m31100| Wed Jun 13 22:30:31 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95ad786611e55db1ae811'), ok: 1.0 }
m31100| Wed Jun 13 22:30:31 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95ad786611e55db1ae811'), ok: 1.0 }
m31100| Wed Jun 13 22:30:31 [rsMgr] info PageFaultRetryableSection will not yield, already locked upon reaching
m31100| Wed Jun 13 22:30:31 [rsMgr] replSet PRIMARY
m31101| Wed Jun 13 22:30:31 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state RECOVERING
m31101| Wed Jun 13 22:30:31 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state PRIMARY
m31102| Wed Jun 13 22:30:31 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state PRIMARY
m31102| Wed Jun 13 22:30:31 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state RECOVERING
ReplSetTest Timestamp(1339644613000, 11)
ReplSetTest waiting for connection to localhost:31101 to have an oplog built.
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
m30999| Wed Jun 13 22:30:33 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' acquired, ts : 4fd95ad90a7325a56e63017f
m30999| Wed Jun 13 22:30:33 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' unlocked.
m31100| Wed Jun 13 22:30:33 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state RECOVERING
m31100| Wed Jun 13 22:30:33 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state RECOVERING
ReplSetTest waiting for connection to localhost:31101 to have an oplog built.
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
ReplSetTest waiting for connection to localhost:31101 to have an oplog built.
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
m31101| Wed Jun 13 22:30:37 [conn3] end connection 184.173.149.242:56360 (3 connections now open)
m31101| Wed Jun 13 22:30:37 [initandlisten] connection accepted from 184.173.149.242:56366 #5 (4 connections now open)
ReplSetTest waiting for connection to localhost:31101 to have an oplog built.
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
m31100| Wed Jun 13 22:30:39 [conn3] end connection 184.173.149.242:42653 (3 connections now open)
m31100| Wed Jun 13 22:30:39 [initandlisten] connection accepted from 184.173.149.242:42658 #5 (4 connections now open)
m31100| Wed Jun 13 22:30:39 [conn4] end connection 184.173.149.242:42654 (3 connections now open)
m31100| Wed Jun 13 22:30:39 [initandlisten] connection accepted from 184.173.149.242:42659 #6 (4 connections now open)
m31101| Wed Jun 13 22:30:39 [rsSync] replSet initial sync pending
m31101| Wed Jun 13 22:30:39 [rsSync] replSet syncing to: tp2.10gen.cc:31100
m31100| Wed Jun 13 22:30:39 [initandlisten] connection accepted from 184.173.149.242:42660 #7 (5 connections now open)
m31101| Wed Jun 13 22:30:39 [rsSync] build index local.me { _id: 1 }
m31101| Wed Jun 13 22:30:39 [rsSync] build index done. scanned 0 total records. 0.009 secs
m31101| Wed Jun 13 22:30:39 [rsSync] replSet initial sync drop all databases
m31101| Wed Jun 13 22:30:39 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Wed Jun 13 22:30:39 [rsSync] replSet initial sync clone all databases
m31101| Wed Jun 13 22:30:39 [rsSync] replSet initial sync data copy, starting syncup
m31101| Wed Jun 13 22:30:39 [rsSync] replSet initial sync building indexes
m31101| Wed Jun 13 22:30:39 [rsSync] replSet initial sync query minValid
m31101| Wed Jun 13 22:30:39 [rsSync] replSet initial sync finishing up
m31101| Wed Jun 13 22:30:39 [rsSync] replSet set minValid=4fd95ac5:b
m31101| Wed Jun 13 22:30:39 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Wed Jun 13 22:30:39 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Wed Jun 13 22:30:39 [rsSync] replSet initial sync done
m31100| Wed Jun 13 22:30:39 [conn7] end connection 184.173.149.242:42660 (4 connections now open)
{
"ts" : Timestamp(1339644613000, 11),
"h" : NumberLong(0),
"op" : "n",
"ns" : "",
"o" : {
"msg" : "initiating set"
}
}
ReplSetTest await TS for connection to localhost:31101 is 1339644613000:11 and latest is 1339644613000:11
ReplSetTest await oplog size for connection to localhost:31101 is 1
ReplSetTest waiting for connection to localhost:31102 to have an oplog built.
m31102| Wed Jun 13 22:30:39 [rsSync] replSet initial sync pending
m31102| Wed Jun 13 22:30:39 [rsSync] replSet syncing to: tp2.10gen.cc:31100
m31100| Wed Jun 13 22:30:39 [initandlisten] connection accepted from 184.173.149.242:42661 #8 (5 connections now open)
m31102| Wed Jun 13 22:30:39 [rsSync] build index local.me { _id: 1 }
m31102| Wed Jun 13 22:30:39 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:30:39 [rsSync] replSet initial sync drop all databases
m31102| Wed Jun 13 22:30:39 [rsSync] dropAllDatabasesExceptLocal 1
m31102| Wed Jun 13 22:30:39 [rsSync] replSet initial sync clone all databases
m31102| Wed Jun 13 22:30:39 [rsSync] replSet initial sync data copy, starting syncup
m31102| Wed Jun 13 22:30:39 [rsSync] replSet initial sync building indexes
m31102| Wed Jun 13 22:30:39 [rsSync] replSet initial sync query minValid
m31102| Wed Jun 13 22:30:39 [rsSync] replSet initial sync finishing up
m31102| Wed Jun 13 22:30:39 [rsSync] replSet set minValid=4fd95ac5:b
m31102| Wed Jun 13 22:30:39 [rsSync] build index local.replset.minvalid { _id: 1 }
m31102| Wed Jun 13 22:30:39 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:30:39 [rsSync] replSet initial sync done
m31100| Wed Jun 13 22:30:39 [conn8] end connection 184.173.149.242:42661 (4 connections now open)
m31101| Wed Jun 13 22:30:40 [rsBackgroundSync] replSet syncing to: tp2.10gen.cc:31100
m31100| Wed Jun 13 22:30:40 [initandlisten] connection accepted from 184.173.149.242:42662 #9 (5 connections now open)
m31101| Wed Jun 13 22:30:40 [rsBackgroundSync] replSet remoteOldestOp: Jun 13 22:30:13 4fd95ac5:b
m31101| Wed Jun 13 22:30:40 [rsBackgroundSync] replSet lastOpTimeFetched: Jun 13 22:30:13 4fd95ac5:b
m31100| Wed Jun 13 22:30:40 [conn9] query has no more but tailable, cursorid: 6601708515634117873
m31101| Wed Jun 13 22:30:40 [rsSyncNotifier] replset setting oplog notifier to tp2.10gen.cc:31100
m31100| Wed Jun 13 22:30:40 [initandlisten] connection accepted from 184.173.149.242:42663 #10 (6 connections now open)
m31100| Wed Jun 13 22:30:40 [conn10] query has no more but tailable, cursorid: 3362803626803920257
m31101| Wed Jun 13 22:30:40 [rsSync] replSet SECONDARY
m31102| Wed Jun 13 22:30:40 [rsBackgroundSync] replSet syncing to: tp2.10gen.cc:31100
m31100| Wed Jun 13 22:30:40 [initandlisten] connection accepted from 184.173.149.242:42664 #11 (7 connections now open)
m31102| Wed Jun 13 22:30:40 [rsBackgroundSync] replSet remoteOldestOp: Jun 13 22:30:13 4fd95ac5:b
m31102| Wed Jun 13 22:30:40 [rsBackgroundSync] replSet lastOpTimeFetched: Jun 13 22:30:13 4fd95ac5:b
m31100| Wed Jun 13 22:30:40 [conn11] query has no more but tailable, cursorid: 4670251792658529729
m31102| Wed Jun 13 22:30:40 [rsSyncNotifier] replset setting oplog notifier to tp2.10gen.cc:31100
m31100| Wed Jun 13 22:30:40 [initandlisten] connection accepted from 184.173.149.242:42665 #12 (8 connections now open)
m31100| Wed Jun 13 22:30:40 [conn12] query has no more but tailable, cursorid: 8344561705540058817
m31102| Wed Jun 13 22:30:40 [rsSync] replSet SECONDARY
m31100| Wed Jun 13 22:30:41 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state SECONDARY
m31100| Wed Jun 13 22:30:41 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state SECONDARY
m31101| Wed Jun 13 22:30:41 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state SECONDARY
m31102| Wed Jun 13 22:30:41 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state SECONDARY
m31100| Wed Jun 13 22:30:41 [slaveTracking] build index local.slaves { _id: 1 }
m31100| Wed Jun 13 22:30:41 [slaveTracking] build index done. scanned 0 total records. 0 secs
{
"ts" : Timestamp(1339644613000, 11),
"h" : NumberLong(0),
"op" : "n",
"ns" : "",
"o" : {
"msg" : "initiating set"
}
}
ReplSetTest await TS for connection to localhost:31101 is 1339644613000:11 and latest is 1339644613000:11
ReplSetTest await oplog size for connection to localhost:31101 is 1
{
"ts" : Timestamp(1339644613000, 11),
"h" : NumberLong(0),
"op" : "n",
"ns" : "",
"o" : {
"msg" : "initiating set"
}
}
ReplSetTest await TS for connection to localhost:31102 is 1339644613000:11 and latest is 1339644613000:11
ReplSetTest await oplog size for connection to localhost:31102 is 1
ReplSetTest await synced=true
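
The waits above correspond to reading the newest oplog entry on each secondary and comparing its timestamp to the primary's. A hedged sketch of that check (host/port from this run; the query itself is illustrative, not ReplSetTest's exact code):

// Read the latest entry from local.oplog.rs on a secondary.
var sec = new Mongo("localhost:31101");
sec.setSlaveOk();                                   // allow reads on a secondary
var last = sec.getDB("local").getCollection("oplog.rs")
              .find().sort({ $natural: -1 }).limit(1).next();
printjson(last);  // on a fresh set this is the "initiating set" no-op shown above
// The set is considered synced once this ts matches the primary's latest ts.
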
adding shard addshard4/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m30999| Wed Jun 13 22:30:41 [conn] starting new replica set monitor for replica set addshard4 with seed of foobar:27017
m30999| Wed Jun 13 22:30:41 [conn] getaddrinfo("foobar") failed: Name or service not known
m30999| Wed Jun 13 22:30:41 [conn] error connecting to seed foobar:27017 :: caused by :: 15928 couldn't connect to server foobar:27017
m30999| Wed Jun 13 22:30:43 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' acquired, ts : 4fd95ae30a7325a56e630180
m30999| Wed Jun 13 22:30:43 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' unlocked.
m30999| Wed Jun 13 22:30:43 [conn] warning: No primary detected for set addshard4
m30999| Wed Jun 13 22:30:43 [conn] All nodes for set addshard4 are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks
m30999| Wed Jun 13 22:30:43 [conn] replica set monitor for replica set addshard4 started, address is addshard4/
m30999| Wed Jun 13 22:30:43 [ReplicaSetMonitorWatcher] starting
m30999| Wed Jun 13 22:30:45 [conn] warning: No primary detected for set addshard4
m30999| Wed Jun 13 22:30:45 [conn] deleting replica set monitor for: addshard4/
m30999| Wed Jun 13 22:30:45 [conn] addshard request { addshard: "addshard4/foobar" } failed: couldn't connect to new shard socket exception
m30999| Wed Jun 13 22:30:45 [conn] starting new replica set monitor for replica set addshard4 with seed of tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m30999| Wed Jun 13 22:30:45 [conn] successfully connected to seed tp2.10gen.cc:31100 for replica set addshard4
m31100| Wed Jun 13 22:30:45 [initandlisten] connection accepted from 184.173.149.242:42666 #13 (9 connections now open)
m30999| Wed Jun 13 22:30:45 [conn] changing hosts to { 0: "tp2.10gen.cc:31100", 1: "tp2.10gen.cc:31101", 2: "tp2.10gen.cc:31102" } from addshard4/
m30999| Wed Jun 13 22:30:45 [conn] trying to add new host tp2.10gen.cc:31100 to replica set addshard4
m31100| Wed Jun 13 22:30:45 [initandlisten] connection accepted from 184.173.149.242:42667 #14 (10 connections now open)
m30999| Wed Jun 13 22:30:45 [conn] successfully connected to new host tp2.10gen.cc:31100 in replica set addshard4
m30999| Wed Jun 13 22:30:45 [conn] trying to add new host tp2.10gen.cc:31101 to replica set addshard4
m31101| Wed Jun 13 22:30:45 [initandlisten] connection accepted from 184.173.149.242:56377 #6 (5 connections now open)
m30999| Wed Jun 13 22:30:45 [conn] successfully connected to new host tp2.10gen.cc:31101 in replica set addshard4
m30999| Wed Jun 13 22:30:45 [conn] trying to add new host tp2.10gen.cc:31102 to replica set addshard4
m31102| Wed Jun 13 22:30:45 [initandlisten] connection accepted from 184.173.149.242:52536 #5 (5 connections now open)
m30999| Wed Jun 13 22:30:45 [conn] successfully connected to new host tp2.10gen.cc:31102 in replica set addshard4
m31100| Wed Jun 13 22:30:45 [initandlisten] connection accepted from 184.173.149.242:42670 #15 (11 connections now open)
m31100| Wed Jun 13 22:30:45 [conn13] end connection 184.173.149.242:42666 (10 connections now open)
m30999| Wed Jun 13 22:30:45 [conn] Primary for replica set addshard4 changed to tp2.10gen.cc:31100
m31101| Wed Jun 13 22:30:45 [initandlisten] connection accepted from 184.173.149.242:56380 #7 (6 connections now open)
m31102| Wed Jun 13 22:30:45 [initandlisten] connection accepted from 184.173.149.242:52539 #6 (6 connections now open)
m30999| Wed Jun 13 22:30:45 [conn] replica set monitor for replica set addshard4 started, address is addshard4/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31100| Wed Jun 13 22:30:45 [initandlisten] connection accepted from 184.173.149.242:42673 #16 (11 connections now open)
m30999| Wed Jun 13 22:30:45 [conn] going to add shard: { _id: "addshard4", host: "addshard4/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102" }
true
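The retry then passes the full member list, and the shard document is stored under _id "addshard4". A sketch of the equivalent shell call against the mongos (the seed string is copied from the "adding shard" line above):

    var res = db.getSiblingDB("admin").runCommand({
        addshard: "addshard4/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102"
    });
    printjson(res);   // ok: 1; config.shards gains { _id: "addshard4", host: "addshard4/..." }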
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201, 31202 ] 31200 number
{
    "useHostName" : undefined,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31200,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "addshard42",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 0,
        "set" : "addshard42"
    }
}
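That options document is what ReplSetTest feeds to each mongod it launches; the "shell: started program" line below is its direct command-line translation. A sketch of how a jstest usually drives this harness (the values mirror the log, but treating them as the exact arguments used by addshard4.js is an assumption):

    // Standard ReplSetTest workflow; produces start/initiate output like the above.
    var rst = new ReplSetTest({ name: "addshard42", nodes: 3, oplogSize: 40 });
    rst.startSet();          // one mongod per node, with the printed options
    rst.initiate();          // can also take an explicit config, as the arbiter setup below does
    rst.awaitReplication();  // corresponds to the "await synced=true" lines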
ReplSetTest Starting....
Resetting db path '/data/db/addshard42-0'
Wed Jun 13 22:30:46 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet addshard42 --dbpath /data/db/addshard42-0
m31200| note: noprealloc may hurt performance in many applications
m31200| Wed Jun 13 22:30:46
m31200| Wed Jun 13 22:30:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31200| Wed Jun 13 22:30:46
m31200| Wed Jun 13 22:30:46 [initandlisten] MongoDB starting : pid=9841 port=31200 dbpath=/data/db/addshard42-0 32-bit host=tp2.10gen.cc
m31200| Wed Jun 13 22:30:46 [initandlisten] _DEBUG build (which is slower)
m31200| Wed Jun 13 22:30:46 [initandlisten]
m31200| Wed Jun 13 22:30:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31200| Wed Jun 13 22:30:46 [initandlisten] ** Not recommended for production.
m31200| Wed Jun 13 22:30:46 [initandlisten]
m31200| Wed Jun 13 22:30:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31200| Wed Jun 13 22:30:46 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31200| Wed Jun 13 22:30:46 [initandlisten] ** with --journal, the limit is lower
m31200| Wed Jun 13 22:30:46 [initandlisten]
m31200| Wed Jun 13 22:30:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31200| Wed Jun 13 22:30:46 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31200| Wed Jun 13 22:30:46 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31200| Wed Jun 13 22:30:46 [initandlisten] options: { dbpath: "/data/db/addshard42-0", noprealloc: true, oplogSize: 40, port: 31200, replSet: "addshard42", rest: true, smallfiles: true }
m31200| Wed Jun 13 22:30:46 [initandlisten] waiting for connections on port 31200
m31200| Wed Jun 13 22:30:46 [websvr] admin web console waiting for connections on port 32200
m31200| Wed Jun 13 22:30:46 [initandlisten] connection accepted from 184.173.149.242:40967 #1 (1 connection now open)
m31200| Wed Jun 13 22:30:46 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31200| Wed Jun 13 22:30:46 [conn1] opening db: local
m31200| Wed Jun 13 22:30:46 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31200| Wed Jun 13 22:30:46 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31200| Wed Jun 13 22:30:46 [initandlisten] connection accepted from 127.0.0.1:55959 #2 (2 connections now open)
[ connection to localhost:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201, 31202 ] 31201 number
{
    "useHostName" : undefined,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31201,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "addshard42",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 1,
        "set" : "addshard42"
    }
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard42-1'
Wed Jun 13 22:30:46 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet addshard42 --dbpath /data/db/addshard42-1
m31201| note: noprealloc may hurt performance in many applications
m31201| Wed Jun 13 22:30:46
m31201| Wed Jun 13 22:30:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31201| Wed Jun 13 22:30:46
m31201| Wed Jun 13 22:30:46 [initandlisten] MongoDB starting : pid=9857 port=31201 dbpath=/data/db/addshard42-1 32-bit host=tp2.10gen.cc
m31201| Wed Jun 13 22:30:46 [initandlisten] _DEBUG build (which is slower)
m31201| Wed Jun 13 22:30:46 [initandlisten]
m31201| Wed Jun 13 22:30:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31201| Wed Jun 13 22:30:46 [initandlisten] ** Not recommended for production.
m31201| Wed Jun 13 22:30:46 [initandlisten]
m31201| Wed Jun 13 22:30:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31201| Wed Jun 13 22:30:46 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31201| Wed Jun 13 22:30:46 [initandlisten] ** with --journal, the limit is lower
m31201| Wed Jun 13 22:30:46 [initandlisten]
m31201| Wed Jun 13 22:30:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31201| Wed Jun 13 22:30:46 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31201| Wed Jun 13 22:30:46 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31201| Wed Jun 13 22:30:46 [initandlisten] options: { dbpath: "/data/db/addshard42-1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "addshard42", rest: true, smallfiles: true }
m31201| Wed Jun 13 22:30:46 [initandlisten] waiting for connections on port 31201
m31201| Wed Jun 13 22:30:46 [websvr] admin web console waiting for connections on port 32201
m31201| Wed Jun 13 22:30:46 [initandlisten] connection accepted from 184.173.149.242:59291 #1 (1 connection now open)
m31201| Wed Jun 13 22:30:46 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31201| Wed Jun 13 22:30:46 [conn1] opening db: local
m31201| Wed Jun 13 22:30:46 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31201| Wed Jun 13 22:30:46 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31201| Wed Jun 13 22:30:46 [initandlisten] connection accepted from 127.0.0.1:51089 #2 (2 connections now open)
[ connection to localhost:31200, connection to localhost:31201 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31200, 31201, 31202 ] 31202 number
{
    "useHostName" : undefined,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31202,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "addshard42",
    "dbpath" : "$set-$node",
    "restart" : undefined,
    "pathOpts" : {
        "node" : 2,
        "set" : "addshard42"
    }
}
ReplSetTest Starting....
Resetting db path '/data/db/addshard42-2'
Wed Jun 13 22:30:46 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --port 31202 --noprealloc --smallfiles --rest --replSet addshard42 --dbpath /data/db/addshard42-2
m31202| note: noprealloc may hurt performance in many applications
m31202| Wed Jun 13 22:30:46
m31202| Wed Jun 13 22:30:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31202| Wed Jun 13 22:30:46
m31202| Wed Jun 13 22:30:46 [initandlisten] MongoDB starting : pid=9873 port=31202 dbpath=/data/db/addshard42-2 32-bit host=tp2.10gen.cc
m31202| Wed Jun 13 22:30:46 [initandlisten] _DEBUG build (which is slower)
m31202| Wed Jun 13 22:30:46 [initandlisten]
m31202| Wed Jun 13 22:30:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31202| Wed Jun 13 22:30:46 [initandlisten] ** Not recommended for production.
m31202| Wed Jun 13 22:30:46 [initandlisten]
m31202| Wed Jun 13 22:30:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31202| Wed Jun 13 22:30:46 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31202| Wed Jun 13 22:30:46 [initandlisten] ** with --journal, the limit is lower
m31202| Wed Jun 13 22:30:46 [initandlisten]
m31202| Wed Jun 13 22:30:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31202| Wed Jun 13 22:30:46 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31202| Wed Jun 13 22:30:46 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31202| Wed Jun 13 22:30:46 [initandlisten] options: { dbpath: "/data/db/addshard42-2", noprealloc: true, oplogSize: 40, port: 31202, replSet: "addshard42", rest: true, smallfiles: true }
m31202| Wed Jun 13 22:30:46 [initandlisten] waiting for connections on port 31202
m31202| Wed Jun 13 22:30:46 [websvr] admin web console waiting for connections on port 32202
m31202| Wed Jun 13 22:30:46 [initandlisten] connection accepted from 184.173.149.242:42737 #1 (1 connection now open)
m31202| Wed Jun 13 22:30:46 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31202| Wed Jun 13 22:30:46 [conn1] opening db: local
m31202| Wed Jun 13 22:30:46 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31202| Wed Jun 13 22:30:46 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31202| Wed Jun 13 22:30:46 [initandlisten] connection accepted from 127.0.0.1:55129 #2 (2 connections now open)
[
    connection to localhost:31200,
    connection to localhost:31201,
    connection to localhost:31202
]
{
    "replSetInitiate" : {
        "_id" : "addshard42",
        "members" : [
            {
                "_id" : 0,
                "host" : "tp2.10gen.cc:31200"
            },
            {
                "_id" : 1,
                "host" : "tp2.10gen.cc:31201"
            },
            {
                "_id" : 2,
                "host" : "tp2.10gen.cc:31202",
                "arbiterOnly" : true
            }
        ]
    }
}
m31200| Wed Jun 13 22:30:46 [conn2] replSet replSetInitiate admin command received from client
m31200| Wed Jun 13 22:30:46 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31201| Wed Jun 13 22:30:46 [initandlisten] connection accepted from 184.173.149.242:59296 #3 (3 connections now open)
m31202| Wed Jun 13 22:30:46 [initandlisten] connection accepted from 184.173.149.242:42740 #3 (3 connections now open)
m31200| Wed Jun 13 22:30:46 [conn2] replSet replSetInitiate all members seem up
m31200| Wed Jun 13 22:30:46 [conn2] ******
m31200| Wed Jun 13 22:30:46 [conn2] creating replication oplog of size: 40MB...
m31200| Wed Jun 13 22:30:46 [FileAllocator] allocating new datafile /data/db/addshard42-0/local.ns, filling with zeroes...
m31200| Wed Jun 13 22:30:46 [FileAllocator] creating directory /data/db/addshard42-0/_tmp
m31200| Wed Jun 13 22:30:46 [FileAllocator] done allocating datafile /data/db/addshard42-0/local.ns, size: 16MB, took 0.037 secs
m31200| Wed Jun 13 22:30:46 [FileAllocator] allocating new datafile /data/db/addshard42-0/local.0, filling with zeroes...
m31200| Wed Jun 13 22:30:46 [FileAllocator] done allocating datafile /data/db/addshard42-0/local.0, size: 64MB, took 0.118 secs
m31200| Wed Jun 13 22:30:46 [conn2] datafileheader::init initializing /data/db/addshard42-0/local.0 n:0
m31200| Wed Jun 13 22:30:46 [conn2] ******
m31200| Wed Jun 13 22:30:46 [conn2] replSet info saving a newer config version to local.system.replset
m31200| Wed Jun 13 22:30:46 [conn2] replSet saveConfigLocally done
m31200| Wed Jun 13 22:30:46 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31200| Wed Jun 13 22:30:46 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "addshard42", members: [ { _id: 0.0, host: "tp2.10gen.cc:31200" }, { _id: 1.0, host: "tp2.10gen.cc:31201" }, { _id: 2.0, host: "tp2.10gen.cc:31202", arbiterOnly: true } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:175454 w:70 reslen:112 176ms
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
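The same initiate step can be reproduced directly in the shell; the config document is copied from the log and the command is the standard replSetInitiate (rs.initiate(cfg) is the equivalent helper):

    var cfg = {
        _id: "addshard42",
        members: [
            { _id: 0, host: "tp2.10gen.cc:31200" },
            { _id: 1, host: "tp2.10gen.cc:31201" },
            { _id: 2, host: "tp2.10gen.cc:31202", arbiterOnly: true }
        ]
    };
    var res = db.getSiblingDB("admin").runCommand({ replSetInitiate: cfg });
    printjson(res);   // expect the "Config now saved locally ..." reply shown above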
m31102| Wed Jun 13 22:30:51 [conn3] end connection 184.173.149.242:52519 (5 connections now open)
m31102| Wed Jun 13 22:30:51 [initandlisten] connection accepted from 184.173.149.242:52552 #7 (6 connections now open)
m30999| Wed Jun 13 22:30:53 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' acquired, ts : 4fd95aed0a7325a56e630181
m30999| Wed Jun 13 22:30:53 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' unlocked.
m31102| Wed Jun 13 22:30:53 [conn4] end connection 184.173.149.242:52522 (5 connections now open)
m31102| Wed Jun 13 22:30:53 [initandlisten] connection accepted from 184.173.149.242:52553 #8 (6 connections now open)
m31101| Wed Jun 13 22:30:53 [conn4] end connection 184.173.149.242:56365 (5 connections now open)
m31101| Wed Jun 13 22:30:53 [initandlisten] connection accepted from 184.173.149.242:56396 #8 (6 connections now open)
m31200| Wed Jun 13 22:30:56 [rsStart] replSet load config ok from self
m31200| Wed Jun 13 22:30:56 [rsStart] replSet I am tp2.10gen.cc:31200
m31200| Wed Jun 13 22:30:56 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31202
m31200| Wed Jun 13 22:30:56 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31201
m31200| Wed Jun 13 22:30:56 [rsStart] replSet STARTUP2
m31200| Wed Jun 13 22:30:56 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is up
m31200| Wed Jun 13 22:30:56 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is up
m31200| Wed Jun 13 22:30:56 [rsSync] replSet SECONDARY
m31201| Wed Jun 13 22:30:56 [rsStart] trying to contact tp2.10gen.cc:31200
m31200| Wed Jun 13 22:30:56 [initandlisten] connection accepted from 184.173.149.242:40980 #3 (3 connections now open)
m31201| Wed Jun 13 22:30:56 [rsStart] replSet load config ok from tp2.10gen.cc:31200
m31201| Wed Jun 13 22:30:56 [rsStart] replSet I am tp2.10gen.cc:31201
m31201| Wed Jun 13 22:30:56 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31202
m31201| Wed Jun 13 22:30:56 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31200
m31201| Wed Jun 13 22:30:56 [rsStart] replSet got config version 1 from a remote, saving locally
m31201| Wed Jun 13 22:30:56 [rsStart] replSet info saving a newer config version to local.system.replset
m31201| Wed Jun 13 22:30:56 [FileAllocator] allocating new datafile /data/db/addshard42-1/local.ns, filling with zeroes...
m31201| Wed Jun 13 22:30:56 [FileAllocator] creating directory /data/db/addshard42-1/_tmp
m31201| Wed Jun 13 22:30:56 [FileAllocator] done allocating datafile /data/db/addshard42-1/local.ns, size: 16MB, took 0.036 secs
m31201| Wed Jun 13 22:30:56 [FileAllocator] allocating new datafile /data/db/addshard42-1/local.0, filling with zeroes...
m31201| Wed Jun 13 22:30:56 [FileAllocator] done allocating datafile /data/db/addshard42-1/local.0, size: 16MB, took 0.035 secs
m31201| Wed Jun 13 22:30:56 [rsStart] datafileheader::init initializing /data/db/addshard42-1/local.0 n:0
m31201| Wed Jun 13 22:30:56 [rsStart] replSet saveConfigLocally done
m31201| Wed Jun 13 22:30:56 [rsStart] replSet STARTUP2
m31201| Wed Jun 13 22:30:56 [rsSync] ******
m31201| Wed Jun 13 22:30:56 [rsSync] creating replication oplog of size: 40MB...
m31201| Wed Jun 13 22:30:56 [FileAllocator] allocating new datafile /data/db/addshard42-1/local.1, filling with zeroes...
m31201| Wed Jun 13 22:30:56 [FileAllocator] done allocating datafile /data/db/addshard42-1/local.1, size: 64MB, took 0.116 secs
m31201| Wed Jun 13 22:30:56 [rsSync] datafileheader::init initializing /data/db/addshard42-1/local.1 n:1
m31202| Wed Jun 13 22:30:56 [rsStart] trying to contact tp2.10gen.cc:31200
m31200| Wed Jun 13 22:30:56 [initandlisten] connection accepted from 184.173.149.242:40981 #4 (4 connections now open)
m31202| Wed Jun 13 22:30:56 [rsStart] replSet load config ok from tp2.10gen.cc:31200
m31202| Wed Jun 13 22:30:56 [rsStart] replSet I am tp2.10gen.cc:31202
m31202| Wed Jun 13 22:30:56 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31201
m31202| Wed Jun 13 22:30:56 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31200
m31202| Wed Jun 13 22:30:56 [rsStart] replSet got config version 1 from a remote, saving locally
m31202| Wed Jun 13 22:30:56 [rsStart] replSet info saving a newer config version to local.system.replset
m31202| Wed Jun 13 22:30:56 [FileAllocator] allocating new datafile /data/db/addshard42-2/local.ns, filling with zeroes...
m31202| Wed Jun 13 22:30:56 [FileAllocator] creating directory /data/db/addshard42-2/_tmp
m31201| Wed Jun 13 22:30:56 [rsSync] ******
m31201| Wed Jun 13 22:30:56 [rsSync] replSet initial sync pending
m31201| Wed Jun 13 22:30:56 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31202| Wed Jun 13 22:30:56 [FileAllocator] done allocating datafile /data/db/addshard42-2/local.ns, size: 16MB, took 0.037 secs
m31202| Wed Jun 13 22:30:56 [FileAllocator] allocating new datafile /data/db/addshard42-2/local.0, filling with zeroes...
m31202| Wed Jun 13 22:30:56 [FileAllocator] done allocating datafile /data/db/addshard42-2/local.0, size: 16MB, took 0.035 secs
m31202| Wed Jun 13 22:30:56 [rsStart] datafileheader::init initializing /data/db/addshard42-2/local.0 n:0
m31202| Wed Jun 13 22:30:56 [rsStart] replSet saveConfigLocally done
m31202| Wed Jun 13 22:30:56 [rsStart] replSet STARTUP2
m31200| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state STARTUP2
m31200| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state STARTUP2
m31200| Wed Jun 13 22:30:58 [rsMgr] not electing self, tp2.10gen.cc:31202 would veto
m31200| Wed Jun 13 22:30:58 [rsMgr] not electing self, tp2.10gen.cc:31202 would veto
m31201| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is up
m31201| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state SECONDARY
m31202| Wed Jun 13 22:30:58 [initandlisten] connection accepted from 184.173.149.242:42746 #4 (4 connections now open)
m31201| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is up
m31201| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state STARTUP2
m31202| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is up
m31202| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state SECONDARY
m31201| Wed Jun 13 22:30:58 [initandlisten] connection accepted from 184.173.149.242:59304 #4 (4 connections now open)
m31202| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is up
m31202| Wed Jun 13 22:30:58 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state STARTUP2
m30999| Wed Jun 13 22:31:03 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' acquired, ts : 4fd95af70a7325a56e630182
m30999| Wed Jun 13 22:31:03 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644612:1804289383' unlocked.
m31200| Wed Jun 13 22:31:04 [rsMgr] replSet info electSelf 0
m31202| Wed Jun 13 22:31:04 [conn3] replSet received elect msg { replSetElect: 1, set: "addshard42", who: "tp2.10gen.cc:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd95af8dcb9c4536f64bddc') }
m31202| Wed Jun 13 22:31:04 [conn3] replSet RECOVERING
m31201| Wed Jun 13 22:31:04 [conn3] replSet received elect msg { replSetElect: 1, set: "addshard42", who: "tp2.10gen.cc:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd95af8dcb9c4536f64bddc') }
m31202| Wed Jun 13 22:31:04 [conn3] replSet info voting yea for tp2.10gen.cc:31200 (0)
m31201| Wed Jun 13 22:31:04 [conn3] replSet RECOVERING
m31201| Wed Jun 13 22:31:04 [conn3] replSet info voting yea for tp2.10gen.cc:31200 (0)
m31200| Wed Jun 13 22:31:04 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95af8dcb9c4536f64bddc'), ok: 1.0 }
m31200| Wed Jun 13 22:31:04 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95af8dcb9c4536f64bddc'), ok: 1.0 }
m31200| Wed Jun 13 22:31:04 [rsMgr] info PageFaultRetryableSection will not yield, already locked upon reaching
m31200| Wed Jun 13 22:31:04 [rsMgr] replSet PRIMARY
m31201| Wed Jun 13 22:31:04 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state PRIMARY
m31201| Wed Jun 13 22:31:04 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state RECOVERING
m31202| Wed Jun 13 22:31:04 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state PRIMARY
m31202| Wed Jun 13 22:31:04 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state RECOVERING
ReplSetTest Timestamp(1339644646000, 11)
ReplSetTest waiting for connection to localhost:31201 to have an oplog built.
m31200| Wed Jun 13 22:31:06 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state ARBITER
m31200| Wed Jun 13 22:31:06 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state RECOVERING
m31201| Wed Jun 13 22:31:06 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state ARBITER
ReplSetTest waiting for connection to localhost:31201 to have an oplog built.
m31101| Wed Jun 13 22:31:07 [conn5] end connection 184.173.149.242:56366 (5 connections now open)
m31101| Wed Jun 13 22:31:07 [initandlisten] connection accepted from 184.173.149.242:56401 #9 (6 connections now open)
ReplSetTest waiting for connection to localhost:31201 to have an oplog built.
m31100| Wed Jun 13 22:31:09 [conn5] end connection 184.173.149.242:42658 (10 connections now open)
m31100| Wed Jun 13 22:31:09 [initandlisten] connection accepted from 184.173.149.242:42693 #17 (11 connections now open)
m31100| Wed Jun 13 22:31:09 [conn6] end connection 184.173.149.242:42659 (10 connections now open)
m31100| Wed Jun 13 22:31:09 [initandlisten] connection accepted from 184.173.149.242:42694 #18 (11 connections now open)
m31201| Wed Jun 13 22:31:10 [conn3] end connection 184.173.149.242:59296 (3 connections now open)
m31201| Wed Jun 13 22:31:10 [initandlisten] connection accepted from 184.173.149.242:59308 #5 (4 connections now open)
ReplSetTest waiting for connection to localhost:31201 to have an oplog built.
m31200| Wed Jun 13 22:31:12 [conn3] end connection 184.173.149.242:40980 (3 connections now open)
m31200| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:40988 #5 (4 connections now open)
m31200| Wed Jun 13 22:31:12 [conn4] end connection 184.173.149.242:40981 (3 connections now open)
m31200| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:40989 #6 (4 connections now open)
m31201| Wed Jun 13 22:31:12 [rsSync] replSet initial sync pending
m31201| Wed Jun 13 22:31:12 [rsSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:40990 #7 (5 connections now open)
m31201| Wed Jun 13 22:31:12 [rsSync] build index local.me { _id: 1 }
m31201| Wed Jun 13 22:31:12 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Wed Jun 13 22:31:12 [rsSync] replSet initial sync drop all databases
m31201| Wed Jun 13 22:31:12 [rsSync] dropAllDatabasesExceptLocal 1
m31201| Wed Jun 13 22:31:12 [rsSync] replSet initial sync clone all databases
m31201| Wed Jun 13 22:31:12 [rsSync] replSet initial sync data copy, starting syncup
m31201| Wed Jun 13 22:31:12 [rsSync] replSet initial sync building indexes
m31201| Wed Jun 13 22:31:12 [rsSync] replSet initial sync query minValid
m31201| Wed Jun 13 22:31:12 [rsSync] replSet initial sync finishing up
m30000| Wed Jun 13 22:31:12 [clientcursormon] mem (MB) res:36 virt:159 mapped:32
m31201| Wed Jun 13 22:31:12 [rsSync] replSet set minValid=4fd95ae6:b
m31201| Wed Jun 13 22:31:12 [rsSync] build index local.replset.minvalid { _id: 1 }
m31201| Wed Jun 13 22:31:12 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Wed Jun 13 22:31:12 [rsSync] replSet initial sync done
m31200| Wed Jun 13 22:31:12 [conn7] end connection 184.173.149.242:40990 (4 connections now open)
m30001| Wed Jun 13 22:31:12 [clientcursormon] mem (MB) res:19 virt:121 mapped:0
{
    "ts" : Timestamp(1339644646000, 11),
    "h" : NumberLong(0),
    "op" : "n",
    "ns" : "",
    "o" : {
        "msg" : "initiating set"
    }
}
ReplSetTest await TS for connection to localhost:31201 is 1339644646000:11 and latest is 1339644646000:11
ReplSetTest await oplog size for connection to localhost:31201 is 1
ReplSetTest await synced=true
adding shard addshard42
m30999| Wed Jun 13 22:31:12 [conn] starting new replica set monitor for replica set addshard42 with seed of tp2.10gen.cc:31202
m31202| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:42755 #5 (5 connections now open)
m30999| Wed Jun 13 22:31:12 [conn] successfully connected to seed tp2.10gen.cc:31202 for replica set addshard42
m30999| Wed Jun 13 22:31:12 [conn] changing hosts to { 0: "tp2.10gen.cc:31201", 1: "tp2.10gen.cc:31200" } from addshard42/
m30999| Wed Jun 13 22:31:12 [conn] trying to add new host tp2.10gen.cc:31200 to replica set addshard42
m30999| Wed Jun 13 22:31:12 [conn] successfully connected to new host tp2.10gen.cc:31200 in replica set addshard42
m30999| Wed Jun 13 22:31:12 [conn] trying to add new host tp2.10gen.cc:31201 to replica set addshard42
m31200| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:40992 #8 (5 connections now open)
m31201| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:59314 #6 (5 connections now open)
m30999| Wed Jun 13 22:31:12 [conn] successfully connected to new host tp2.10gen.cc:31201 in replica set addshard42
m31202| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:42758 #6 (6 connections now open)
m31202| Wed Jun 13 22:31:12 [conn5] end connection 184.173.149.242:42755 (5 connections now open)
m31200| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:40995 #9 (6 connections now open)
m30999| Wed Jun 13 22:31:12 [conn] Primary for replica set addshard42 changed to tp2.10gen.cc:31200
m31201| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:59317 #7 (6 connections now open)
m30999| Wed Jun 13 22:31:12 [conn] replica set monitor for replica set addshard42 started, address is addshard42/tp2.10gen.cc:31200,tp2.10gen.cc:31201
m31200| Wed Jun 13 22:31:12 [initandlisten] connection accepted from 184.173.149.242:40997 #10 (7 connections now open)
m30999| Wed Jun 13 22:31:12 [conn] going to add shard: { _id: "addshard42", host: "addshard42/tp2.10gen.cc:31200,tp2.10gen.cc:31201" }
true
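Here the shard is seeded through the arbiter only, and the replica set monitor rewrites the host string to the data-bearing members before the shard document is saved. A quick way to confirm what mongos actually stored (sketch, run on the mongos):

    var shardDoc = db.getSiblingDB("config").shards.findOne({ _id: "addshard42" });
    printjson(shardDoc);
    // -> { _id: "addshard42", host: "addshard42/tp2.10gen.cc:31200,tp2.10gen.cc:31201" }
    //    (the arbiter on 31202 is discovered but not listed as a shard host)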
m30000| Wed Jun 13 22:31:12 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Wed Jun 13 22:31:12 [interruptThread] now exiting
m30000| Wed Jun 13 22:31:12 dbexit:
m30000| Wed Jun 13 22:31:12 [interruptThread] shutdown: going to close listening sockets...
m30000| Wed Jun 13 22:31:12 [interruptThread] closing listening socket: 13
m30000| Wed Jun 13 22:31:12 [interruptThread] closing listening socket: 14
m30000| Wed Jun 13 22:31:12 [interruptThread] closing listening socket: 17
m30000| Wed Jun 13 22:31:12 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Wed Jun 13 22:31:12 [interruptThread] shutdown: going to flush diaglog...
m30000| Wed Jun 13 22:31:12 [interruptThread] shutdown: going to close sockets...
m30000| Wed Jun 13 22:31:12 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Wed Jun 13 22:31:12 [interruptThread] shutdown: closing all files...
m30000| Wed Jun 13 22:31:12 [interruptThread] closeAllFiles() finished
m30000| Wed Jun 13 22:31:12 [interruptThread] shutdown: removing fs lock...
m30000| Wed Jun 13 22:31:12 dbexit: really exiting now
m30999| Wed Jun 13 22:31:12 [CheckConfigServers] Socket say send() errno:32 Broken pipe 184.173.149.242:30000
m30999| Wed Jun 13 22:31:12 [CheckConfigServers] warning: couldn't check on config server:tp2.10gen.cc:30000 ok for now : 9001 socket exception [2] server [184.173.149.242:30000]
m30999| Wed Jun 13 22:31:13 [LockPinger] Socket say send() errno:32 Broken pipe 184.173.149.242:30000
m30999| Wed Jun 13 22:31:13 [LockPinger] warning: distributed lock pinger 'tp2.10gen.cc:30000/tp2.10gen.cc:30999:1339644612:1804289383' detected an exception while pinging. :: caused by :: socket exception
m30999| Wed Jun 13 22:31:13 [Balancer] DBClientCursor::init call() failed
m30999| Wed Jun 13 22:31:13 [Balancer] Assertion: 13632:couldn't get updated shard list from config server
m30999| Wed Jun 13 22:31:13 [Balancer] dev: lastError==0 won't report:couldn't get updated shard list from config server
m30999| 0x846782a 0x8676fb1 0x85ef1c0 0x8421b42 0x842011a 0x847eeaa 0x85127d1 0x8515478 0x8515390 0x8515316 0x8515298 0x84569ca 0xd5d919 0xca6d4e
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x26) [0x846782a]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo10logContextEPKc+0x5b) [0x8676fb1]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xee) [0x85ef1c0]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo15StaticShardInfo6reloadEv+0x196) [0x8421b42]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo5Shard15reloadShardInfoEv+0x20) [0x842011a]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo8Balancer3runEv+0x1e0) [0x847eeaa]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0x2b1) [0x85127d1]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZNK5boost4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS3_9JobStatusEEEEclEPS3_S6_+0x68) [0x8515478]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5boost3_bi5list2INS0_5valueIPN5mongo13BackgroundJobEEENS2_INS_10shared_ptrINS4_9JobStatusEEEEEEclINS_4_mfi3mf1IvS4_S9_EENS0_5list0EEEvNS0_4typeIvEERT_RT0_i+0x72) [0x8515390]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5boost3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS5_9JobStatusEEEEENS0_5list2INS0_5valueIPS5_EENSB_IS8_EEEEEclEv+0x48) [0x8515316]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x22) [0x8515298]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos() [0x84569ca]
m30999| /lib/libpthread.so.0() [0xd5d919]
m30999| /lib/libc.so.6(clone+0x5e) [0xca6d4e]
m30999| Wed Jun 13 22:31:13 [Balancer] scoped connection to tp2.10gen.cc:30000 not being returned to the pool
m30999| Wed Jun 13 22:31:13 [Balancer] caught exception while doing balance: couldn't get updated shard list from config server
m31100| Wed Jun 13 22:31:13 [clientcursormon] mem (MB) res:36 virt:301 mapped:80
m31201| Wed Jun 13 22:31:13 [rsBackgroundSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:31:13 [initandlisten] connection accepted from 184.173.149.242:40998 #11 (8 connections now open)
m31201| Wed Jun 13 22:31:13 [rsBackgroundSync] replSet remoteOldestOp: Jun 13 22:30:46 4fd95ae6:b
m31201| Wed Jun 13 22:31:13 [rsBackgroundSync] replSet lastOpTimeFetched: Jun 13 22:30:46 4fd95ae6:b
m31200| Wed Jun 13 22:31:13 [conn11] query has no more but tailable, cursorid: 257529004641975649
m31101| Wed Jun 13 22:31:13 [clientcursormon] mem (MB) res:36 virt:304 mapped:96
m31201| Wed Jun 13 22:31:13 [rsSyncNotifier] replset setting oplog notifier to tp2.10gen.cc:31200
m31200| Wed Jun 13 22:31:13 [initandlisten] connection accepted from 184.173.149.242:40999 #12 (9 connections now open)
m31200| Wed Jun 13 22:31:13 [conn12] query has no more but tailable, cursorid: 6194672184364596712
m31201| Wed Jun 13 22:31:13 [rsSync] replSet SECONDARY
m31102| Wed Jun 13 22:31:13 [clientcursormon] mem (MB) res:36 virt:304 mapped:96
m30001| Wed Jun 13 22:31:13 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Wed Jun 13 22:31:13 [interruptThread] now exiting
m30001| Wed Jun 13 22:31:13 dbexit:
m30001| Wed Jun 13 22:31:13 [interruptThread] shutdown: going to close listening sockets...
m30001| Wed Jun 13 22:31:13 [interruptThread] closing listening socket: 16
m30001| Wed Jun 13 22:31:13 [interruptThread] closing listening socket: 17
m30001| Wed Jun 13 22:31:13 [interruptThread] closing listening socket: 18
m30001| Wed Jun 13 22:31:13 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Wed Jun 13 22:31:13 [interruptThread] shutdown: going to flush diaglog...
m30001| Wed Jun 13 22:31:13 [interruptThread] shutdown: going to close sockets...
m30001| Wed Jun 13 22:31:13 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Wed Jun 13 22:31:13 [interruptThread] shutdown: closing all files...
m30001| Wed Jun 13 22:31:13 [interruptThread] closeAllFiles() finished
m30001| Wed Jun 13 22:31:13 [interruptThread] shutdown: removing fs lock...
m30001| Wed Jun 13 22:31:13 dbexit: really exiting now
m31200| Wed Jun 13 22:31:14 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state SECONDARY
m31202| Wed Jun 13 22:31:14 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state SECONDARY
m31200| Wed Jun 13 22:31:14 [slaveTracking] build index local.slaves { _id: 1 }
m31200| Wed Jun 13 22:31:14 [slaveTracking] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:14 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31100| Wed Jun 13 22:31:14 [conn14] end connection 184.173.149.242:42667 (10 connections now open)
m31100| Wed Jun 13 22:31:14 [conn16] end connection 184.173.149.242:42673 (10 connections now open)
m31101| Wed Jun 13 22:31:14 [conn7] end connection 184.173.149.242:56380 (5 connections now open)
m31101| Wed Jun 13 22:31:14 [conn6] end connection 184.173.149.242:56377 (5 connections now open)
m31200| Wed Jun 13 22:31:14 [conn9] end connection 184.173.149.242:40995 (8 connections now open)
m31200| Wed Jun 13 22:31:14 [conn8] end connection 184.173.149.242:40992 (8 connections now open)
m31102| Wed Jun 13 22:31:14 [conn6] end connection 184.173.149.242:52539 (5 connections now open)
m31202| Wed Jun 13 22:31:14 [conn6] end connection 184.173.149.242:42758 (4 connections now open)
m31201| Wed Jun 13 22:31:14 [conn6] end connection 184.173.149.242:59314 (5 connections now open)
m31201| Wed Jun 13 22:31:14 [conn7] end connection 184.173.149.242:59317 (4 connections now open)
m31200| Wed Jun 13 22:31:14 [conn10] end connection 184.173.149.242:40997 (6 connections now open)
m31102| Wed Jun 13 22:31:14 [conn5] end connection 184.173.149.242:52536 (4 connections now open)
m31100| Wed Jun 13 22:31:14 [conn15] end connection 184.173.149.242:42670 (8 connections now open)
m31100| Wed Jun 13 22:31:15 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Wed Jun 13 22:31:15 [interruptThread] now exiting
m31100| Wed Jun 13 22:31:15 dbexit:
m31100| Wed Jun 13 22:31:15 [interruptThread] shutdown: going to close listening sockets...
m31100| Wed Jun 13 22:31:15 [interruptThread] closing listening socket: 23
m31100| Wed Jun 13 22:31:15 [interruptThread] closing listening socket: 25
m31100| Wed Jun 13 22:31:15 [interruptThread] closing listening socket: 27
m31100| Wed Jun 13 22:31:15 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Wed Jun 13 22:31:15 [interruptThread] shutdown: going to flush diaglog...
m31100| Wed Jun 13 22:31:15 [interruptThread] shutdown: going to close sockets...
m31100| Wed Jun 13 22:31:15 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Wed Jun 13 22:31:15 [interruptThread] shutdown: closing all files...
m31102| Wed Jun 13 22:31:15 [conn7] end connection 184.173.149.242:52552 (3 connections now open)
m31101| Wed Jun 13 22:31:15 [conn9] end connection 184.173.149.242:56401 (3 connections now open)
m31100| Wed Jun 13 22:31:15 [conn1] end connection 184.173.149.242:42643 (7 connections now open)
m31101| Wed Jun 13 22:31:15 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: tp2.10gen.cc:31100
m31102| Wed Jun 13 22:31:15 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: tp2.10gen.cc:31100
m31100| Wed Jun 13 22:31:15 [interruptThread] closeAllFiles() finished
m31100| Wed Jun 13 22:31:15 [interruptThread] shutdown: removing fs lock...
m31100| Wed Jun 13 22:31:15 dbexit: really exiting now
m31101| Wed Jun 13 22:31:16 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Wed Jun 13 22:31:16 [interruptThread] now exiting
m31101| Wed Jun 13 22:31:16 dbexit:
m31101| Wed Jun 13 22:31:16 [interruptThread] shutdown: going to close listening sockets...
m31101| Wed Jun 13 22:31:16 [interruptThread] closing listening socket: 26
m31101| Wed Jun 13 22:31:16 [interruptThread] closing listening socket: 29
m31101| Wed Jun 13 22:31:16 [interruptThread] closing listening socket: 30
m31101| Wed Jun 13 22:31:16 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Wed Jun 13 22:31:16 [interruptThread] shutdown: going to flush diaglog...
m31101| Wed Jun 13 22:31:16 [interruptThread] shutdown: going to close sockets...
m31101| Wed Jun 13 22:31:16 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Wed Jun 13 22:31:16 [interruptThread] shutdown: closing all files...
m31102| Wed Jun 13 22:31:16 [conn8] end connection 184.173.149.242:52553 (2 connections now open)
m31101| Wed Jun 13 22:31:16 [conn1] end connection 184.173.149.242:56355 (2 connections now open)
m31101| Wed Jun 13 22:31:16 [interruptThread] closeAllFiles() finished
m31101| Wed Jun 13 22:31:16 [interruptThread] shutdown: removing fs lock...
m31101| Wed Jun 13 22:31:16 dbexit: really exiting now
m31102| Wed Jun 13 22:31:17 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Wed Jun 13 22:31:17 [rsHealthPoll] replSet info tp2.10gen.cc:31101 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31101 ns: admin.$cmd query: { replSetHeartbeat: "addshard4", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31102" }
m31102| Wed Jun 13 22:31:17 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state DOWN
m31102| Wed Jun 13 22:31:17 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Wed Jun 13 22:31:17 [rsHealthPoll] replSet info tp2.10gen.cc:31100 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31100 ns: admin.$cmd query: { replSetHeartbeat: "addshard4", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31102" }
m31102| Wed Jun 13 22:31:17 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state DOWN
m31102| Wed Jun 13 22:31:17 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Wed Jun 13 22:31:17 [interruptThread] now exiting
m31102| Wed Jun 13 22:31:17 dbexit:
m31102| Wed Jun 13 22:31:17 [interruptThread] shutdown: going to close listening sockets...
m31102| Wed Jun 13 22:31:17 [interruptThread] closing listening socket: 29
m31102| Wed Jun 13 22:31:17 [interruptThread] closing listening socket: 32
m31102| Wed Jun 13 22:31:17 [interruptThread] closing listening socket: 33
m31102| Wed Jun 13 22:31:17 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Wed Jun 13 22:31:17 [interruptThread] shutdown: going to flush diaglog...
m31102| Wed Jun 13 22:31:17 [interruptThread] shutdown: going to close sockets...
m31102| Wed Jun 13 22:31:17 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Wed Jun 13 22:31:17 [interruptThread] shutdown: closing all files...
m31102| Wed Jun 13 22:31:17 [conn1] end connection 184.173.149.242:52516 (1 connection now open)
m31102| Wed Jun 13 22:31:17 [interruptThread] closeAllFiles() finished
m31102| Wed Jun 13 22:31:17 [interruptThread] shutdown: removing fs lock...
m31102| Wed Jun 13 22:31:17 dbexit: really exiting now
m31200| Wed Jun 13 22:31:18 got signal 15 (Terminated), will terminate after current cmd ends
m31200| Wed Jun 13 22:31:18 [interruptThread] now exiting
m31200| Wed Jun 13 22:31:18 dbexit:
m31200| Wed Jun 13 22:31:18 [interruptThread] shutdown: going to close listening sockets...
m31200| Wed Jun 13 22:31:18 [interruptThread] closing listening socket: 32
m31200| Wed Jun 13 22:31:18 [interruptThread] closing listening socket: 36
m31200| Wed Jun 13 22:31:18 [interruptThread] closing listening socket: 37
m31200| Wed Jun 13 22:31:18 [interruptThread] removing socket file: /tmp/mongodb-31200.sock
m31200| Wed Jun 13 22:31:18 [interruptThread] shutdown: going to flush diaglog...
m31200| Wed Jun 13 22:31:18 [interruptThread] shutdown: going to close sockets...
m31200| Wed Jun 13 22:31:18 [interruptThread] shutdown: waiting for fs preallocator...
m31200| Wed Jun 13 22:31:18 [interruptThread] shutdown: closing all files...
m31200| Wed Jun 13 22:31:18 [conn1] end connection 184.173.149.242:40967 (5 connections now open)
m31201| Wed Jun 13 22:31:18 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: tp2.10gen.cc:31200
m31201| Wed Jun 13 22:31:18 [conn5] end connection 184.173.149.242:59308 (3 connections now open)
m31202| Wed Jun 13 22:31:18 [conn3] end connection 184.173.149.242:42740 (3 connections now open)
m31200| Wed Jun 13 22:31:18 [interruptThread] closeAllFiles() finished
m31200| Wed Jun 13 22:31:18 [interruptThread] shutdown: removing fs lock...
m31200| Wed Jun 13 22:31:18 dbexit: really exiting now
m31201| Wed Jun 13 22:31:19 got signal 15 (Terminated), will terminate after current cmd ends
m31201| Wed Jun 13 22:31:19 [interruptThread] now exiting
m31201| Wed Jun 13 22:31:19 dbexit:
m31201| Wed Jun 13 22:31:19 [interruptThread] shutdown: going to close listening sockets...
m31201| Wed Jun 13 22:31:19 [interruptThread] closing listening socket: 35
m31201| Wed Jun 13 22:31:19 [interruptThread] closing listening socket: 39
m31201| Wed Jun 13 22:31:19 [interruptThread] closing listening socket: 40
m31201| Wed Jun 13 22:31:19 [interruptThread] removing socket file: /tmp/mongodb-31201.sock
m31201| Wed Jun 13 22:31:19 [interruptThread] shutdown: going to flush diaglog...
m31201| Wed Jun 13 22:31:19 [interruptThread] shutdown: going to close sockets...
m31201| Wed Jun 13 22:31:19 [interruptThread] shutdown: waiting for fs preallocator...
m31201| Wed Jun 13 22:31:19 [interruptThread] shutdown: closing all files...
m31202| Wed Jun 13 22:31:19 [conn4] end connection 184.173.149.242:42746 (2 connections now open)
m31201| Wed Jun 13 22:31:19 [conn1] end connection 184.173.149.242:59291 (2 connections now open)
m31201| Wed Jun 13 22:31:19 [interruptThrea
 69444.476128ms
Wed Jun 13 22:31:21 [initandlisten] connection accepted from 127.0.0.1:53990 #5 (4 connections now open)
*******************************************
Test : addshard5.js ...
Command : /home/yellow/buildslave/Linux_32bit_debug/mongo/mongo --port 27999 --nodb /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard5.js --eval TestData = new Object();TestData.testPath = "/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/addshard5.js";TestData.testFile = "addshard5.js";TestData.testName = "addshard5";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Wed Jun 13 22:31:21 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/test0'
Wed Jun 13 22:31:21 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30000 --dbpath /data/db/test0
m30000| Wed Jun 13 22:31:21
m30000| Wed Jun 13 22:31:21 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Wed Jun 13 22:31:21
m30000| Wed Jun 13 22:31:21 [initandlisten] MongoDB starting : pid=9975 port=30000 dbpath=/data/db/test0 32-bit host=tp2.10gen.cc
m30000| Wed Jun 13 22:31:21 [initandlisten] _DEBUG build (which is slower)
m30000| Wed Jun 13 22:31:21 [initandlisten]
m30000| Wed Jun 13 22:31:21 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Wed Jun 13 22:31:21 [initandlisten] ** Not recommended for production.
m30000| Wed Jun 13 22:31:21 [initandlisten]
m30000| Wed Jun 13 22:31:21 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Wed Jun 13 22:31:21 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Wed Jun 13 22:31:21 [initandlisten] ** with --journal, the limit is lower
m30000| Wed Jun 13 22:31:21 [initandlisten]
m30000| Wed Jun 13 22:31:21 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Wed Jun 13 22:31:21 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Wed Jun 13 22:31:21 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30000| Wed Jun 13 22:31:21 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
m30000| Wed Jun 13 22:31:21 [initandlisten] opening db: local
m30000| Wed Jun 13 22:31:21 [initandlisten] waiting for connections on port 30000
m30000| Wed Jun 13 22:31:21 [websvr] admin web console waiting for connections on port 31000
m30000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:56952 #1 (1 connection now open)
Resetting db path '/data/db/test1'
Wed Jun 13 22:31:22 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30001 --dbpath /data/db/test1
m30001| Wed Jun 13 22:31:22
m30001| Wed Jun 13 22:31:22 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Wed Jun 13 22:31:22
m30001| Wed Jun 13 22:31:22 [initandlisten] MongoDB starting : pid=9988 port=30001 dbpath=/data/db/test1 32-bit host=tp2.10gen.cc
m30001| Wed Jun 13 22:31:22 [initandlisten] _DEBUG build (which is slower)
m30001| Wed Jun 13 22:31:22 [initandlisten]
m30001| Wed Jun 13 22:31:22 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Wed Jun 13 22:31:22 [initandlisten] ** Not recommended for production.
m30001| Wed Jun 13 22:31:22 [initandlisten]
m30001| Wed Jun 13 22:31:22 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Wed Jun 13 22:31:22 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Wed Jun 13 22:31:22 [initandlisten] ** with --journal, the limit is lower
m30001| Wed Jun 13 22:31:22 [initandlisten]
m30001| Wed Jun 13 22:31:22 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Wed Jun 13 22:31:22 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Wed Jun 13 22:31:22 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30001| Wed Jun 13 22:31:22 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
m30001| Wed Jun 13 22:31:22 [initandlisten] opening db: local
m30001| Wed Jun 13 22:31:22 [initandlisten] waiting for connections on port 30001
m30001| Wed Jun 13 22:31:22 [websvr] admin web console waiting for connections on port 31001
m30001| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:51216 #1 (1 connection now open)
Resetting db path '/data/db/test2'
Wed Jun 13 22:31:22 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30002 --dbpath /data/db/test2
m30002| Wed Jun 13 22:31:22
m30002| Wed Jun 13 22:31:22 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Wed Jun 13 22:31:22
m30002| Wed Jun 13 22:31:22 [initandlisten] MongoDB starting : pid=10001 port=30002 dbpath=/data/db/test2 32-bit host=tp2.10gen.cc
m30002| Wed Jun 13 22:31:22 [initandlisten] _DEBUG build (which is slower)
m30002| Wed Jun 13 22:31:22 [initandlisten]
m30002| Wed Jun 13 22:31:22 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Wed Jun 13 22:31:22 [initandlisten] ** Not recommended for production.
m30002| Wed Jun 13 22:31:22 [initandlisten]
m30002| Wed Jun 13 22:31:22 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Wed Jun 13 22:31:22 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Wed Jun 13 22:31:22 [initandlisten] ** with --journal, the limit is lower
m30002| Wed Jun 13 22:31:22 [initandlisten]
m30002| Wed Jun 13 22:31:22 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Wed Jun 13 22:31:22 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Wed Jun 13 22:31:22 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30002| Wed Jun 13 22:31:22 [initandlisten] options: { dbpath: "/data/db/test2", port: 30002 }
m30002| Wed Jun 13 22:31:22 [initandlisten] opening db: local
m30002| Wed Jun 13 22:31:22 [initandlisten] waiting for connections on port 30002
m30002| Wed Jun 13 22:31:22 [websvr] admin web console waiting for connections on port 31002
m30002| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:59365 #1 (1 connection now open)
Resetting db path '/data/db/test-config0'
Wed Jun 13 22:31:22 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 29000 --dbpath /data/db/test-config0
m29000| Wed Jun 13 22:31:22
m29000| Wed Jun 13 22:31:22 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Wed Jun 13 22:31:22
m29000| Wed Jun 13 22:31:22 [initandlisten] MongoDB starting : pid=10014 port=29000 dbpath=/data/db/test-config0 32-bit host=tp2.10gen.cc
m29000| Wed Jun 13 22:31:22 [initandlisten] _DEBUG build (which is slower)
m29000| Wed Jun 13 22:31:22 [initandlisten]
m29000| Wed Jun 13 22:31:22 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Wed Jun 13 22:31:22 [initandlisten] ** Not recommended for production.
m29000| Wed Jun 13 22:31:22 [initandlisten]
m29000| Wed Jun 13 22:31:22 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Wed Jun 13 22:31:22 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Wed Jun 13 22:31:22 [initandlisten] ** with --journal, the limit is lower
m29000| Wed Jun 13 22:31:22 [initandlisten]
m29000| Wed Jun 13 22:31:22 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Wed Jun 13 22:31:22 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Wed Jun 13 22:31:22 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m29000| Wed Jun 13 22:31:22 [initandlisten] options: { dbpath: "/data/db/test-config0", port: 29000 }
m29000| Wed Jun 13 22:31:22 [initandlisten] opening db: local
m29000| Wed Jun 13 22:31:22 [initandlisten] waiting for connections on port 29000
m29000| Wed Jun 13 22:31:22 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:30000
m29000| Wed Jun 13 22:31:22 [websvr] ERROR: addr already in use
m29000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:35928 #1 (1 connection now open)
"localhost:29000"
m29000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:35929 #2 (2 connections now open)
m29000| Wed Jun 13 22:31:22 [conn2] opening db: config
ShardingTest test :
{
    "config" : "localhost:29000",
    "shards" : [
        connection to localhost:30000,
        connection to localhost:30001,
        connection to localhost:30002
    ]
}
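This is the topology addshard5.js builds before exercising addShard: one config server on port 29000, three shard mongods, and a single mongos. One plausible way the harness is invoked for that (the option names here are an assumption; only the resulting topology is taken from the log):

    // Hedged sketch of the ShardingTest setup behind the output above.
    var st = new ShardingTest({ name: "test", shards: 3, mongos: 1 });
    // st.s is the mongos connection, st.config the config database it uses.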
m29000| Wed Jun 13 22:31:22 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Wed Jun 13 22:31:22 [FileAllocator] creating directory /data/db/test-config0/_tmp
Wed Jun 13 22:31:22 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --port 30999 --configdb localhost:29000
m30999| Wed Jun 13 22:31:22 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Jun 13 22:31:22 [mongosMain] MongoS version 2.1.2-pre- starting: pid=10029 port=30999 32-bit host=tp2.10gen.cc (--help for usage)
m30999| Wed Jun 13 22:31:22 [mongosMain] _DEBUG build
m30999| Wed Jun 13 22:31:22 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Wed Jun 13 22:31:22 [mongosMain] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30999| Wed Jun 13 22:31:22 [mongosMain] options: { configdb: "localhost:29000", port: 30999 }
m29000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:35931 #3 (3 connections now open)
m29000| Wed Jun 13 22:31:22 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.035 secs
m29000| Wed Jun 13 22:31:22 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Wed Jun 13 22:31:22 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.037 secs
m29000| Wed Jun 13 22:31:22 [conn2] datafileheader::init initializing /data/db/test-config0/config.0 n:0
m29000| Wed Jun 13 22:31:22 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Wed Jun 13 22:31:22 [conn2] build index config.settings { _id: 1 }
m29000| Wed Jun 13 22:31:22 [conn2] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:35932 #4 (4 connections now open)
m29000| Wed Jun 13 22:31:22 [conn4] build index config.version { _id: 1 }
m29000| Wed Jun 13 22:31:22 [conn4] build index done. scanned 0 total records. 0.006 secs
m30999| Wed Jun 13 22:31:22 [mongosMain] waiting for connections on port 30999
m29000| Wed Jun 13 22:31:22 [conn3] build index config.chunks { _id: 1 }
m29000| Wed Jun 13 22:31:22 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:22 [conn3] info: creating collection config.chunks on add index
m29000| Wed Jun 13 22:31:22 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Wed Jun 13 22:31:22 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:22 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Wed Jun 13 22:31:22 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:22 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Wed Jun 13 22:31:22 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:22 [conn3] build index config.shards { _id: 1 }
m29000| Wed Jun 13 22:31:22 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:22 [conn3] info: creating collection config.shards on add index
m29000| Wed Jun 13 22:31:22 [conn3] build index config.shards { host: 1 }
m29000| Wed Jun 13 22:31:22 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:22 [websvr] admin web console waiting for connections on port 31999
m30999| Wed Jun 13 22:31:22 [Balancer] about to contact config servers and shards
m30999| Wed Jun 13 22:31:22 [Balancer] config servers and shards contacted successfully
m30999| Wed Jun 13 22:31:22 [Balancer] balancer id: tp2.10gen.cc:30999 started at Jun 13 22:31:22
m30999| Wed Jun 13 22:31:22 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Wed Jun 13 22:31:22 [conn4] build index config.mongos { _id: 1 }
m29000| Wed Jun 13 22:31:22 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:35933 #5 (5 connections now open)
m30999| Wed Jun 13 22:31:22 [LockPinger] creating distributed lock ping thread for localhost:29000 and process tp2.10gen.cc:30999:1339644682:1804289383 (sleeping for 30000ms)
m29000| Wed Jun 13 22:31:22 [conn3] build index config.lockpings { _id: 1 }
m29000| Wed Jun 13 22:31:22 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:22 [conn5] build index config.locks { _id: 1 }
m29000| Wed Jun 13 22:31:22 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:22 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.068 secs
m29000| Wed Jun 13 22:31:22 [conn3] build index config.lockpings { ping: 1 }
m29000| Wed Jun 13 22:31:22 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Wed Jun 13 22:31:22 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644682:1804289383' acquired, ts : 4fd95b0a0cb6971935d91e13
m30999| Wed Jun 13 22:31:22 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644682:1804289383' unlocked.
m30999| Wed Jun 13 22:31:22 [mongosMain] connection accepted from 127.0.0.1:50121 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Wed Jun 13 22:31:22 [conn] couldn't find database [admin] in config db
m29000| Wed Jun 13 22:31:22 [conn3] build index config.databases { _id: 1 }
m29000| Wed Jun 13 22:31:22 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:22 [conn] put [admin] on: config:localhost:29000
m30000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:56965 #2 (2 connections now open)
m30999| Wed Jun 13 22:31:22 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:51228 #2 (2 connections now open)
m30999| Wed Jun 13 22:31:22 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30002| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:59376 #2 (2 connections now open)
m30999| Wed Jun 13 22:31:22 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
m30000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:56968 #3 (3 connections now open)
m30999| Wed Jun 13 22:31:22 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd95b0a0cb6971935d91e12
m30001| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:51231 #3 (3 connections now open)
m30999| Wed Jun 13 22:31:22 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd95b0a0cb6971935d91e12
m30002| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:59379 #3 (3 connections now open)
m30999| Wed Jun 13 22:31:22 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd95b0a0cb6971935d91e12
m29000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:35941 #6 (6 connections now open)
m30999| Wed Jun 13 22:31:22 [conn] creating WriteBackListener for: localhost:29000 serverID: 4fd95b0a0cb6971935d91e12
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:56972 #4 (4 connections now open)
m30001| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:51235 #4 (4 connections now open)
m30002| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:59383 #4 (4 connections now open)
m30999| Wed Jun 13 22:31:22 [conn] going to start draining shard: shard0002
m30999| primaryLocalDoc: { _id: "local", primary: "shard0002" }
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "shard0002",
"ok" : 1
}
m30999| Wed Jun 13 22:31:22 [conn] going to remove shard: shard0002
{
"msg" : "removeshard completed successfully",
"state" : "completed",
"shard" : "shard0002",
"ok" : 1
}
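The two result documents above show removeShard's two-phase flow: the first call starts draining the shard, and a later call reports completion once nothing is left on it. A sketch of the shell interaction, again illustrative rather than the exact test code:

    var admin = new Mongo("localhost:30999").getDB("admin");
    // first call starts draining chunks and databases off the shard
    printjson(admin.runCommand({ removeShard: "shard0002" }));   // "state" : "started"
    // repeat until the shard is empty; the final call reports "state" : "completed"
    printjson(admin.runCommand({ removeShard: "shard0002" }));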
m30999| Wed Jun 13 22:31:22 [conn] couldn't find database [foo] in config db
m30000| Wed Jun 13 22:31:22 [initandlisten] connection accepted from 127.0.0.1:56975 #5 (5 connections now open)
m30001| Wed Jun 13 22:31:23 [initandlisten] connection accepted from 127.0.0.1:51238 #5 (5 connections now open)
m30999| Wed Jun 13 22:31:23 [conn] put [foo] on: shard0000:localhost:30000
m30999| Wed Jun 13 22:31:23 [conn] enabling sharding on: foo
{ "ok" : 1 }
{ "ok" : 0, "errmsg" : "it is already the primary" }
m30000| Wed Jun 13 22:31:23 [conn5] _DEBUG ReadContext db wasn't open, will try to open foo.system.indexes
m30000| Wed Jun 13 22:31:23 [conn5] opening db: foo
m30999| Wed Jun 13 22:31:23 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Wed Jun 13 22:31:23 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30999| Wed Jun 13 22:31:23 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd95b0b0cb6971935d91e14
m30000| Wed Jun 13 22:31:23 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes...
m30000| Wed Jun 13 22:31:23 [FileAllocator] creating directory /data/db/test0/_tmp
m30999| Wed Jun 13 22:31:23 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd95b0b0cb6971935d91e14 based on: (empty)
m30999| Wed Jun 13 22:31:23 [conn] DEV WARNING appendDate() called with a tiny (but nonzero) date
m29000| Wed Jun 13 22:31:23 [conn3] build index config.collections { _id: 1 }
m29000| Wed Jun 13 22:31:23 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:23 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0.046 secs
m30000| Wed Jun 13 22:31:23 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes...
m30000| Wed Jun 13 22:31:23 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 16MB, took 0.047 secs
m30000| Wed Jun 13 22:31:23 [conn5] datafileheader::init initializing /data/db/test0/foo.0 n:0
m30000| Wed Jun 13 22:31:23 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes...
m30000| Wed Jun 13 22:31:23 [conn5] build index foo.bar { _id: 1 }
m30000| Wed Jun 13 22:31:23 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:23 [conn5] info: creating collection foo.bar on add index
m30000| Wed Jun 13 22:31:23 [conn5] insert foo.system.indexes keyUpdates:0 locks(micros) W:123 r:545 w:103090 103ms
m30000| Wed Jun 13 22:31:23 [conn3] no current chunk manager found for this shard, will initialize
m29000| Wed Jun 13 22:31:23 [initandlisten] connection accepted from 127.0.0.1:35947 #7 (7 connections now open)
m30999| Wed Jun 13 22:31:23 [conn] resetting shard version of foo.bar on localhost:30001, version is zero
{ "collectionsharded" : "foo.bar", "ok" : 1 }
m30999| Wed Jun 13 22:31:23 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0001" }
m30999| Wed Jun 13 22:31:23 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
m29000| Wed Jun 13 22:31:23 [initandlisten] connection accepted from 127.0.0.1:35948 #8 (8 connections now open)
m30000| Wed Jun 13 22:31:23 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" }
m30000| Wed Jun 13 22:31:23 [conn5] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:31:23 [LockPinger] creating distributed lock ping thread for localhost:29000 and process tp2.10gen.cc:30000:1339644683:1033413719 (sleeping for 30000ms)
m30000| Wed Jun 13 22:31:23 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644683:1033413719' acquired, ts : 4fd95b0b543019fa5ac20ad6
m30000| Wed Jun 13 22:31:23 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:23-0", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:56975", time: new Date(1339644683131), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Wed Jun 13 22:31:23 [conn5] moveChunk request accepted at version 1|0||4fd95b0b0cb6971935d91e14
m30000| Wed Jun 13 22:31:23 [conn5] moveChunk number of documents: 1
m30001| Wed Jun 13 22:31:23 [initandlisten] connection accepted from 127.0.0.1:51241 #6 (6 connections now open)
m30001| Wed Jun 13 22:31:23 [conn6] opening db: admin
m30000| Wed Jun 13 22:31:23 [initandlisten] connection accepted from 127.0.0.1:56980 #6 (6 connections now open)
m30001| Wed Jun 13 22:31:23 [migrateThread] opening db: foo
m30001| Wed Jun 13 22:31:23 [FileAllocator] allocating new datafile /data/db/test1/foo.ns, filling with zeroes...
m30001| Wed Jun 13 22:31:23 [FileAllocator] creating directory /data/db/test1/_tmp
m30000| Wed Jun 13 22:31:23 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 32MB, took 0.192 secs
m30001| Wed Jun 13 22:31:23 [FileAllocator] done allocating datafile /data/db/test1/foo.ns, size: 16MB, took 0.039 secs
m30001| Wed Jun 13 22:31:23 [FileAllocator] allocating new datafile /data/db/test1/foo.0, filling with zeroes...
m30001| Wed Jun 13 22:31:23 [FileAllocator] done allocating datafile /data/db/test1/foo.0, size: 16MB, took 0.053 secs
m30001| Wed Jun 13 22:31:23 [migrateThread] datafileheader::init initializing /data/db/test1/foo.0 n:0
m30001| Wed Jun 13 22:31:23 [FileAllocator] allocating new datafile /data/db/test1/foo.1, filling with zeroes...
m30001| Wed Jun 13 22:31:23 [migrateThread] build index foo.bar { _id: 1 }
m30001| Wed Jun 13 22:31:23 [migrateThread] build index done. scanned 0 total records. 0.246 secs
m30001| Wed Jun 13 22:31:23 [migrateThread] info: creating collection foo.bar on add index
m30001| Wed Jun 13 22:31:23 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30001| Wed Jun 13 22:31:23 [FileAllocator] done allocating datafile /data/db/test1/foo.1, size: 32MB, took 0.276 secs
m30000| Wed Jun 13 22:31:24 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Wed Jun 13 22:31:24 [conn5] moveChunk setting version to: 2|0||4fd95b0b0cb6971935d91e14
m30001| Wed Jun 13 22:31:24 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30001| Wed Jun 13 22:31:24 [migrateThread] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:24-0", server: "tp2.10gen.cc", clientAddr: ":27017", time: new Date(1339644684141), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 5: 510, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 496 } }
m29000| Wed Jun 13 22:31:24 [initandlisten] connection accepted from 127.0.0.1:35951 #9 (9 connections now open)
m30000| Wed Jun 13 22:31:24 [conn5] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Wed Jun 13 22:31:24 [conn5] moveChunk moved last chunk out for collection 'foo.bar'
m29000| Wed Jun 13 22:31:24 [conn8] info PageFaultRetryableSection will not yield, already locked upon reaching
m30000| Wed Jun 13 22:31:24 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:24-1", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:56975", time: new Date(1339644684143), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Wed Jun 13 22:31:24 [conn5] doing delete inline
m30000| Wed Jun 13 22:31:24 [conn5] moveChunk deleted: 1
m30000| Wed Jun 13 22:31:24 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644683:1033413719' unlocked.
m30000| Wed Jun 13 22:31:24 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:24-2", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:56975", time: new Date(1339644684145), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 14, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 10, step6 of 6: 0 } }
m30000| Wed Jun 13 22:31:24 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:123 r:732 w:103505 reslen:37 1028ms
m30999| Wed Jun 13 22:31:24 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 3 version: 2|0||4fd95b0b0cb6971935d91e14 based on: 1|0||4fd95b0b0cb6971935d91e14
{ "millis" : 1030, "ok" : 1 }
m30999| Wed Jun 13 22:31:24 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0000" }
m30999| Wed Jun 13 22:31:24 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0001:localhost:30001 lastmod: 2|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m29000| Wed Jun 13 22:31:24 [initandlisten] connection accepted from 127.0.0.1:35952 #10 (10 connections now open)
m30001| Wed Jun 13 22:31:24 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" }
m30001| Wed Jun 13 22:31:24 [conn5] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Wed Jun 13 22:31:24 [LockPinger] creating distributed lock ping thread for localhost:29000 and process tp2.10gen.cc:30001:1339644684:162325755 (sleeping for 30000ms)
m30001| Wed Jun 13 22:31:24 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30001:1339644684:162325755' acquired, ts : 4fd95b0cb0a7d0a74c2946c0
m30001| Wed Jun 13 22:31:24 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:24-1", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51238", time: new Date(1339644684154), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m29000| Wed Jun 13 22:31:24 [initandlisten] connection accepted from 127.0.0.1:35953 #11 (11 connections now open)
m30001| Wed Jun 13 22:31:24 [conn5] no current chunk manager found for this shard, will initialize
m30001| Wed Jun 13 22:31:24 [conn5] moveChunk request accepted at version 2|0||4fd95b0b0cb6971935d91e14
m30001| Wed Jun 13 22:31:24 [conn5] moveChunk number of documents: 1
m30000| Wed Jun 13 22:31:24 [conn6] opening db: admin
m30000| Wed Jun 13 22:31:24 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30001| Wed Jun 13 22:31:25 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Wed Jun 13 22:31:25 [conn5] moveChunk setting version to: 3|0||4fd95b0b0cb6971935d91e14
m30000| Wed Jun 13 22:31:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30000| Wed Jun 13 22:31:25 [migrateThread] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:25-3", server: "tp2.10gen.cc", clientAddr: ":27017", time: new Date(1339644685164), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1004 } }
m30001| Wed Jun 13 22:31:25 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "foo.bar", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Wed Jun 13 22:31:25 [conn5] moveChunk moved last chunk out for collection 'foo.bar'
m30001| Wed Jun 13 22:31:25 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:25-2", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51238", time: new Date(1339644685165), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Wed Jun 13 22:31:25 [conn5] doing delete inline
m30001| Wed Jun 13 22:31:25 [conn5] moveChunk deleted: 1
m30001| Wed Jun 13 22:31:25 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30001:1339644684:162325755' unlocked.
m30001| Wed Jun 13 22:31:25 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:25-3", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51238", time: new Date(1339644685167), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 7, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 6, step6 of 6: 0 } }
m30001| Wed Jun 13 22:31:25 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) r:253 w:377 reslen:37 1017ms
m30999| Wed Jun 13 22:31:25 [conn] ChunkManager: time to load chunks for foo.bar: 20ms sequenceNumber: 4 version: 3|0||4fd95b0b0cb6971935d91e14 based on: 2|0||4fd95b0b0cb6971935d91e14
{ "millis" : 1040, "ok" : 1 }
m30999| Wed Jun 13 22:31:25 [conn] going to start draining shard: shard0001
m30999| primaryLocalDoc: { _id: "local", primary: "shard0001" }
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "shard0001",
"ok" : 1
}
m30999| Wed Jun 13 22:31:25 [conn] going to remove shard: shard0001
{
"msg" : "removeshard completed successfully",
"state" : "completed",
"shard" : "shard0001",
"ok" : 1
}
m30002| Wed Jun 13 22:31:25 [initandlisten] connection accepted from 127.0.0.1:59393 #5 (5 connections now open)
m30999| Wed Jun 13 22:31:25 [conn] going to add shard: { _id: "shard0001", host: "localhost:30002" }
{ "shardAdded" : "shard0001", "ok" : 1 }
----
Shard was dropped and re-added with same name...
----
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30002" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "foo", "partitioned" : true, "primary" : "shard0000" }
                foo.bar chunks:
                        shard0000 1
                        { "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : shard0000 Timestamp(3000, 0)
m30999| Wed Jun 13 22:31:25 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 0.0 }, to: "shard0001" }
m30999| Wed Jun 13 22:31:25 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0000:localhost:30000 lastmod: 3|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30002
m30000| Wed Jun 13 22:31:25 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" }
m30000| Wed Jun 13 22:31:25 [conn5] created new distributed lock for foo.bar on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:31:25 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644683:1033413719' acquired, ts : 4fd95b0d543019fa5ac20ad7
m30000| Wed Jun 13 22:31:25 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:25-4", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:56975", time: new Date(1339644685230), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Wed Jun 13 22:31:25 [conn5] moveChunk request accepted at version 3|0||4fd95b0b0cb6971935d91e14
m30000| Wed Jun 13 22:31:25 [conn5] moveChunk number of documents: 1
m30002| Wed Jun 13 22:31:25 [initandlisten] connection accepted from 127.0.0.1:59394 #6 (6 connections now open)
m30002| Wed Jun 13 22:31:25 [conn6] opening db: admin
m30000| Wed Jun 13 22:31:25 [initandlisten] connection accepted from 127.0.0.1:56986 #7 (7 connections now open)
m30002| Wed Jun 13 22:31:25 [migrateThread] opening db: foo
m30002| Wed Jun 13 22:31:25 [FileAllocator] allocating new datafile /data/db/test2/foo.ns, filling with zeroes...
m30002| Wed Jun 13 22:31:25 [FileAllocator] creating directory /data/db/test2/_tmp
m30002| Wed Jun 13 22:31:25 [FileAllocator] done allocating datafile /data/db/test2/foo.ns, size: 16MB, took 0.035 secs
m30002| Wed Jun 13 22:31:25 [FileAllocator] allocating new datafile /data/db/test2/foo.0, filling with zeroes...
m30002| Wed Jun 13 22:31:25 [FileAllocator] done allocating datafile /data/db/test2/foo.0, size: 16MB, took 0.039 secs
m30002| Wed Jun 13 22:31:25 [migrateThread] datafileheader::init initializing /data/db/test2/foo.0 n:0
m30002| Wed Jun 13 22:31:25 [FileAllocator] allocating new datafile /data/db/test2/foo.1, filling with zeroes...
m30002| Wed Jun 13 22:31:25 [migrateThread] build index foo.bar { _id: 1 }
m30002| Wed Jun 13 22:31:25 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Wed Jun 13 22:31:25 [migrateThread] info: creating collection foo.bar on add index
m30002| Wed Jun 13 22:31:25 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30002| Wed Jun 13 22:31:25 [FileAllocator] done allocating datafile /data/db/test2/foo.1, size: 32MB, took 0.069 secs
m30000| Wed Jun 13 22:31:26 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Wed Jun 13 22:31:26 [conn5] moveChunk setting version to: 4|0||4fd95b0b0cb6971935d91e14
m30002| Wed Jun 13 22:31:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: MinKey } -> { _id: MaxKey }
m30002| Wed Jun 13 22:31:26 [migrateThread] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:26-0", server: "tp2.10gen.cc", clientAddr: ":27017", time: new Date(1339644686242), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 5: 88, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 920 } }
m30000| Wed Jun 13 22:31:26 [conn5] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 39, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Wed Jun 13 22:31:26 [conn5] moveChunk moved last chunk out for collection 'foo.bar'
m29000| Wed Jun 13 22:31:26 [initandlisten] connection accepted from 127.0.0.1:35957 #12 (12 connections now open)
m30000| Wed Jun 13 22:31:26 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:26-5", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:56975", time: new Date(1339644686244), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
m30000| Wed Jun 13 22:31:26 [conn5] doing delete inline
m30000| Wed Jun 13 22:31:26 [conn5] moveChunk deleted: 1
m30000| Wed Jun 13 22:31:26 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644683:1033413719' unlocked.
m30000| Wed Jun 13 22:31:26 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:26-6", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:56975", time: new Date(1339644686245), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 11, step6 of 6: 0 } }
m30000| Wed Jun 13 22:31:26 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_MinKey", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 locks(micros) W:123 r:928 w:103830 reslen:37 1017ms
m30999| Wed Jun 13 22:31:26 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 5 version: 4|0||4fd95b0b0cb6971935d91e14 based on: 3|0||4fd95b0b0cb6971935d91e14
{ "millis" : 1019, "ok" : 1 }
m30999| Wed Jun 13 22:31:26 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Wed Jun 13 22:31:26 [conn5] end connection 127.0.0.1:35933 (11 connections now open)
m30000| Wed Jun 13 22:31:26 [conn3] warning: DBException thrown :: caused by :: 9001 socket exception
m29000| Wed Jun 13 22:31:26 [conn3] end connection 127.0.0.1:35931 (11 connections now open)
m30002| Wed Jun 13 22:31:26 [conn5] warning: DBException thrown :: caused by :: 9001 socket exception
m30000| Wed Jun 13 22:31:26 [conn5] warning: DBException thrown :: caused by :: 9001 socket exception
m30001| Wed Jun 13 22:31:26 [conn5] end connection 127.0.0.1:51238 (5 connections now open)
m30001| Wed Jun 13 22:31:26 [conn3] end connection 127.0.0.1:51231 (4 connections now open)
m30002| Wed Jun 13 22:31:26 [conn3] warning: DBException thrown :: caused by :: 9001 socket exception
m29000| Wed Jun 13 22:31:26 [conn6] end connection 127.0.0.1:35941 (9 connections now open)
m29000| Wed Jun 13 22:31:26 [conn4] end connection 127.0.0.1:35932 (8 connections now open)
m30000| 0x886c49a 0x8561b5a 0x864d156 0x877e818 0x88d74b3 0x85cef33 0x8771e6e 0xd5d919 0xca6d4e
m30002| 0x886c49a 0x8561b5a 0x864d156 0x877e818 0x88d74b3 0x85cef33 0x8771e6e 0xd5d919 0x1ecd4e
m30000| 0x886c49a 0x8561b5a 0x864d156 0x877e818 0x88d74b3 0x85cef33 0x8771e6e 0xd5d919 0xca6d4e
m30002| 0x886c49a 0x8561b5a 0x864d156 0x877e818 0x88d74b3 0x85cef33 0x8771e6e 0xd5d919 0x1ecd4e
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x26) [0x886c49a]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0xd2) [0x8561b5a]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBExceptionC2EPKci+0x54) [0x864d156]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15SocketExceptionC2ENS0_4TypeESsiSs+0x30) [0x877e818]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x221) [0x88d74b3]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0x4f) [0x85cef33]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x242) [0x8771e6e]
m30000| /lib/libpthread.so.0() [0xd5d919]
m30000| /lib/libc.so.6(clone+0x5e) [0xca6d4e]
m30000| Wed Jun 13 22:31:26 [conn3] end connection 127.0.0.1:56968 (6 connections now open)
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x26) [0x886c49a]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0xd2) [0x8561b5a]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBExceptionC2EPKci+0x54) [0x864d156]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15SocketExceptionC2ENS0_4TypeESsiSs+0x30) [0x877e818]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x221) [0x88d74b3]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0x4f) [0x85cef33]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x242) [0x8771e6e]
m30002| /lib/libpthread.so.0() [0xd5d919]
m30002| /lib/libc.so.6(clone+0x5e) [0x1ecd4e]
m30002| Wed Jun 13 22:31:26 [conn3] end connection 127.0.0.1:59379 (5 connections now open)
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x26) [0x886c49a]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0xd2) [0x8561b5a]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBExceptionC2EPKci+0x54) [0x864d156]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15SocketExceptionC2ENS0_4TypeESsiSs+0x30) [0x877e818]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x221) [0x88d74b3]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0x4f) [0x85cef33]
m30000| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x242) [0x8771e6e]
m30000| /lib/libpthread.so.0() [0xd5d919]
m30000| /lib/libc.so.6(clone+0x5e) [0xca6d4e]
m30000| Wed Jun 13 22:31:26 [conn5] end connection 127.0.0.1:56975 (5 connections now open)
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x26) [0x886c49a]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0xd2) [0x8561b5a]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBExceptionC2EPKci+0x54) [0x864d156]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15SocketExceptionC2ENS0_4TypeESsiSs+0x30) [0x877e818]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x221) [0x88d74b3]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0x4f) [0x85cef33]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x242) [0x8771e6e]
m30002| /lib/libpthread.so.0() [0xd5d919]
m30002| /lib/libc.so.6(clone+0x5e) [0x1ecd4e]
m30002| Wed Jun 13 22:31:26 [conn5] end connection 127.0.0.1:59393 (4 connections now open)
Wed Jun 13 22:31:27 shell: stopped mongo program on port 30999
m30000| Wed Jun 13 22:31:27 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Wed Jun 13 22:31:27 [interruptThread] now exiting
m30000| Wed Jun 13 22:31:27 dbexit:
m30000| Wed Jun 13 22:31:27 [interruptThread] shutdown: going to close listening sockets...
m30000| Wed Jun 13 22:31:27 [interruptThread] closing listening socket: 14
m30000| Wed Jun 13 22:31:27 [interruptThread] closing listening socket: 15
m30000| Wed Jun 13 22:31:27 [interruptThread] closing listening socket: 16
m30000| Wed Jun 13 22:31:27 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Wed Jun 13 22:31:27 [interruptThread] shutdown: going to flush diaglog...
m30000| Wed Jun 13 22:31:27 [interruptThread] shutdown: going to close sockets...
m30000| Wed Jun 13 22:31:27 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Wed Jun 13 22:31:27 [interruptThread] shutdown: closing all files...
m29000| Wed Jun 13 22:31:27 [conn8] end connection 127.0.0.1:35948 (7 connections now open)
m30001| Wed Jun 13 22:31:27 [conn6] end connection 127.0.0.1:51241 (3 connections now open)
m30002| Wed Jun 13 22:31:27 [conn6] warning: DBException thrown :: caused by :: 9001 socket exception
m30002| 0x886c49a 0x8561b5a 0x864d156 0x877e818 0x88d74b3 0x85cef33 0x8771e6e 0xd5d919 0x1ecd4e
m30000| Wed Jun 13 22:31:27 [interruptThread] closeAllFiles() finished
m30000| Wed Jun 13 22:31:27 [interruptThread] shutdown: removing fs lock...
m30000| Wed Jun 13 22:31:27 dbexit: really exiting now
m29000| Wed Jun 13 22:31:27 [conn7] end connection 127.0.0.1:35947 (6 connections now open)
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15printStackTraceERSo+0x26) [0x886c49a]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBException13traceIfNeededERKS0_+0xd2) [0x8561b5a]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo11DBExceptionC2EPKci+0x54) [0x864d156]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo15SocketExceptionC2ENS0_4TypeESsiSs+0x30) [0x877e818]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo6Socket4recvEPci+0x221) [0x88d74b3]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0x4f) [0x85cef33]
m30002| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x242) [0x8771e6e]
m30002| /lib/libpthread.so.0() [0xd5d919]
m30002| /lib/libc.so.6(clone+0x5e) [0x1ecd4e]
m30002| Wed Jun 13 22:31:27 [conn6] end connection 127.0.0.1:59394 (3 connections now open)
Wed Jun 13 22:31:28 shell: stopped mongo program on port 30000
m30001| Wed Jun 13 22:31:28 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Wed Jun 13 22:31:28 [interruptThread] now exiting
m30001| Wed Jun 13 22:31:28 dbexit:
m30001| Wed Jun 13 22:31:28 [interruptThread] shutdown: going to close listening sockets...
m30001| Wed Jun 13 22:31:28 [interruptThread] closing listening socket: 17
m30001| Wed Jun 13 22:31:28 [interruptThread] closing listening socket: 18
m30001| Wed Jun 13 22:31:28 [interruptThread] closing listening socket: 20
m30001| Wed Jun 13 22:31:28 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Wed Jun 13 22:31:28 [interruptThread] shutdown: going to flush diaglog...
m30001| Wed Jun 13 22:31:28 [interruptThread] shutdown: going to close sockets...
m30001| Wed Jun 13 22:31:28 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Wed Jun 13 22:31:28 [interruptThread] shutdown: closing all files...
m29000| Wed Jun 13 22:31:28 [conn9] end connection 127.0.0.1:35951 (5 connections now open)
m29000| Wed Jun 13 22:31:28 [conn10] end connection 127.0.0.1:35952 (5 connections now open)
m29000| Wed Jun 13 22:31:28 [conn11] end connection 127.0.0.1:35953 (4 connections now open)
m30001| Wed Jun 13 22:31:28 [interruptThread] closeAllFiles() finished
m30001| Wed Jun 13 22:31:28 [interruptThread] shutdown: removing fs lock...
m30001| Wed Jun 13 22:31:28 dbexit: really exiting now
Wed Jun 13 22:31:29 shell: stopped mongo program on port 30001
m30002| Wed Jun 13 22:31:29 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Wed Jun 13 22:31:29 [interruptThread] now exiting
m30002| Wed Jun 13 22:31:29 dbexit:
m30002| Wed Jun 13 22:31:29 [interruptThread] shutdown: going to close listening sockets...
m30002| Wed Jun 13 22:31:29 [interruptThread] closing listening socket: 20
m30002| Wed Jun 13 22:31:29 [interruptThread] closing listening socket: 21
m30002| Wed Jun 13 22:31:29 [interruptThread] closing listening socket: 24
m30002| Wed Jun 13 22:31:29 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Wed Jun 13 22:31:29 [interruptThread] shutdown: going to flush diaglog...
m30002| Wed Jun 13 22:31:29 [interruptThread] shutdown: going to close sockets...
m30002| Wed Jun 13 22:31:29 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Wed Jun 13 22:31:29 [interruptThread] shutdown: closing all files...
m29000| Wed Jun 13 22:31:29 [conn12] end connection 127.0.0.1:35957 (2 connections now open)
m30002| Wed Jun 13 22:31:29 [interruptThread] closeAllFiles() finished
m30002| Wed Jun 13 22:31:29 [interruptThread] shutdown: removing fs lock...
m30002| Wed Jun 13 22:31:29 dbexit: really exiting now
Wed Jun 13 22:31:30 shell: stopped mongo program on port 30002
m29000| Wed Jun 13 22:31:30 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Wed Jun 13 22:31:30 [interruptThread] now exiting
m29000| Wed Jun 13 22:31:30 dbexit:
m29000| Wed Jun 13 22:31:30 [interruptThread] shutdown: going to close listening sockets...
m29000| Wed Jun 13 22:31:30 [interruptThread] closing listening socket: 24
m29000| Wed Jun 13 22:31:30 [interruptThread] closing listening socket: 25
m29000| Wed Jun 13 22:31:30 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Wed Jun 13 22:31:30 [interruptThread] shutdown: going to flush diaglog...
m29000| Wed Jun 13 22:31:30 [interruptThread] shutdown: going to close sockets...
m29000| Wed Jun 13 22:31:30 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Wed Jun 13 22:31:30 [interruptThread] shutdown: closing all files...
m29000| Wed Jun 13 22:31:30 [interruptThread] closeAllFiles() finished
m29000| Wed Jun 13 22:31:30 [interruptThread] shutdown: removing fs lock...
m29000| Wed Jun 13 22:31:30 dbexit: really exiting now
Wed Jun 13 22:31:31 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 9.378 seconds ***
9423.213959ms
Wed Jun 13 22:31:31 [initandlisten] connection accepted from 127.0.0.1:54028 #6 (5 connections now open)
*******************************************
Test : array_shard_key.js ...
Command : /home/yellow/buildslave/Linux_32bit_debug/mongo/mongo --port 27999 --nodb /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/array_shard_key.js --eval TestData = new Object();TestData.testPath = "/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/array_shard_key.js";TestData.testFile = "array_shard_key.js";TestData.testName = "array_shard_key";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Wed Jun 13 22:31:31 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/array_shard_key0'
Wed Jun 13 22:31:31 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30000 --dbpath /data/db/array_shard_key0
m30000| Wed Jun 13 22:31:31
m30000| Wed Jun 13 22:31:31 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Wed Jun 13 22:31:31
m30000| Wed Jun 13 22:31:31 [initandlisten] MongoDB starting : pid=10107 port=30000 dbpath=/data/db/array_shard_key0 32-bit host=tp2.10gen.cc
m30000| Wed Jun 13 22:31:31 [initandlisten] _DEBUG build (which is slower)
m30000| Wed Jun 13 22:31:31 [initandlisten]
m30000| Wed Jun 13 22:31:31 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Wed Jun 13 22:31:31 [initandlisten] ** Not recommended for production.
m30000| Wed Jun 13 22:31:31 [initandlisten]
m30000| Wed Jun 13 22:31:31 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Wed Jun 13 22:31:31 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Wed Jun 13 22:31:31 [initandlisten] ** with --journal, the limit is lower
m30000| Wed Jun 13 22:31:31 [initandlisten]
m30000| Wed Jun 13 22:31:31 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Wed Jun 13 22:31:31 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Wed Jun 13 22:31:31 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30000| Wed Jun 13 22:31:31 [initandlisten] options: { dbpath: "/data/db/array_shard_key0", port: 30000 }
m30000| Wed Jun 13 22:31:31 [initandlisten] opening db: local
m30000| Wed Jun 13 22:31:31 [initandlisten] waiting for connections on port 30000
m30000| Wed Jun 13 22:31:31 [websvr] admin web console waiting for connections on port 31000
m30000| Wed Jun 13 22:31:31 [initandlisten] connection accepted from 127.0.0.1:56990 #1 (1 connection now open)
Resetting db path '/data/db/array_shard_key1'
Wed Jun 13 22:31:31 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30001 --dbpath /data/db/array_shard_key1
m30001| Wed Jun 13 22:31:31
m30001| Wed Jun 13 22:31:31 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Wed Jun 13 22:31:31
m30001| Wed Jun 13 22:31:31 [initandlisten] MongoDB starting : pid=10120 port=30001 dbpath=/data/db/array_shard_key1 32-bit host=tp2.10gen.cc
m30001| Wed Jun 13 22:31:31 [initandlisten] _DEBUG build (which is slower)
m30001| Wed Jun 13 22:31:31 [initandlisten]
m30001| Wed Jun 13 22:31:31 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30001| Wed Jun 13 22:31:31 [initandlisten] ** Not recommended for production.
m30001| Wed Jun 13 22:31:31 [initandlisten]
m30001| Wed Jun 13 22:31:31 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30001| Wed Jun 13 22:31:31 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30001| Wed Jun 13 22:31:31 [initandlisten] ** with --journal, the limit is lower
m30001| Wed Jun 13 22:31:31 [initandlisten]
m30001| Wed Jun 13 22:31:31 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30001| Wed Jun 13 22:31:31 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30001| Wed Jun 13 22:31:31 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30001| Wed Jun 13 22:31:31 [initandlisten] options: { dbpath: "/data/db/array_shard_key1", port: 30001 }
m30001| Wed Jun 13 22:31:31 [initandlisten] opening db: local
m30001| Wed Jun 13 22:31:31 [initandlisten] waiting for connections on port 30001
m30001| Wed Jun 13 22:31:31 [websvr] admin web console waiting for connections on port 31001
m30001| Wed Jun 13 22:31:31 [initandlisten] connection accepted from 127.0.0.1:51254 #1 (1 connection now open)
Resetting db path '/data/db/array_shard_key2'
Wed Jun 13 22:31:31 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30002 --dbpath /data/db/array_shard_key2
m30002| Wed Jun 13 22:31:31
m30002| Wed Jun 13 22:31:31 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30002| Wed Jun 13 22:31:31
m30002| Wed Jun 13 22:31:31 [initandlisten] MongoDB starting : pid=10133 port=30002 dbpath=/data/db/array_shard_key2 32-bit host=tp2.10gen.cc
m30002| Wed Jun 13 22:31:31 [initandlisten] _DEBUG build (which is slower)
m30002| Wed Jun 13 22:31:31 [initandlisten]
m30002| Wed Jun 13 22:31:31 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30002| Wed Jun 13 22:31:31 [initandlisten] ** Not recommended for production.
m30002| Wed Jun 13 22:31:31 [initandlisten]
m30002| Wed Jun 13 22:31:31 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30002| Wed Jun 13 22:31:31 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30002| Wed Jun 13 22:31:31 [initandlisten] ** with --journal, the limit is lower
m30002| Wed Jun 13 22:31:31 [initandlisten]
m30002| Wed Jun 13 22:31:31 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30002| Wed Jun 13 22:31:31 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30002| Wed Jun 13 22:31:31 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30002| Wed Jun 13 22:31:31 [initandlisten] options: { dbpath: "/data/db/array_shard_key2", port: 30002 }
m30002| Wed Jun 13 22:31:31 [initandlisten] opening db: local
m30002| Wed Jun 13 22:31:31 [initandlisten] waiting for connections on port 30002
m30002| Wed Jun 13 22:31:31 [websvr] admin web console waiting for connections on port 31002
m30002| Wed Jun 13 22:31:31 [initandlisten] connection accepted from 127.0.0.1:59403 #1 (1 connection now open)
"localhost:30000"
m30000| Wed Jun 13 22:31:31 [initandlisten] connection accepted from 127.0.0.1:56995 #2 (2 connections now open)
m30000| Wed Jun 13 22:31:31 [conn2] opening db: config
ShardingTest array_shard_key :
{
        "config" : "localhost:30000",
        "shards" : [
                connection to localhost:30000,
                connection to localhost:30001,
                connection to localhost:30002
        ]
}
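A banner like the one above is printed by the shell test harness when the ShardingTest fixture starts. A plausible, hypothetical reconstruction of the setup in array_shard_key.js that would produce it:

    // hypothetical sketch of the fixture setup; the exact options in the real test may differ
    var st = new ShardingTest({ name: "array_shard_key", shards: 3, mongos: 1 });
    var mongos = st.s;                                        // connection to the mongos (port 30999 here)
    var coll = mongos.getCollection("array_shard_key.foo");   // collection exercised by the test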
m30000| Wed Jun 13 22:31:31 [FileAllocator] allocating new datafile /data/db/array_shard_key0/config.ns, filling with zeroes...
m30000| Wed Jun 13 22:31:31 [FileAllocator] creating directory /data/db/array_shard_key0/_tmp
Wed Jun 13 22:31:31 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --port 30999 --configdb localhost:30000
m30999| Wed Jun 13 22:31:31 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Jun 13 22:31:31 [mongosMain] MongoS version 2.1.2-pre- starting: pid=10148 port=30999 32-bit host=tp2.10gen.cc (--help for usage)
m30999| Wed Jun 13 22:31:31 [mongosMain] _DEBUG build
m30999| Wed Jun 13 22:31:31 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Wed Jun 13 22:31:31 [mongosMain] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30999| Wed Jun 13 22:31:31 [mongosMain] options: { configdb: "localhost:30000", port: 30999 }
m30000| Wed Jun 13 22:31:31 [initandlisten] connection accepted from 127.0.0.1:56997 #3 (3 connections now open)
m30000| Wed Jun 13 22:31:31 [FileAllocator] done allocating datafile /data/db/array_shard_key0/config.ns, size: 16MB, took 0.041 secs
m30000| Wed Jun 13 22:31:32 [FileAllocator] allocating new datafile /data/db/array_shard_key0/config.0, filling with zeroes...
m30000| Wed Jun 13 22:31:32 [FileAllocator] done allocating datafile /data/db/array_shard_key0/config.0, size: 16MB, took 0.035 secs
m30000| Wed Jun 13 22:31:32 [conn2] datafileheader::init initializing /data/db/array_shard_key0/config.0 n:0
m30000| Wed Jun 13 22:31:32 [FileAllocator] allocating new datafile /data/db/array_shard_key0/config.1, filling with zeroes...
m30000| Wed Jun 13 22:31:32 [conn2] build index config.settings { _id: 1 }
m30000| Wed Jun 13 22:31:32 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:56998 #4 (4 connections now open)
m30000| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:56999 #5 (5 connections now open)
m30000| Wed Jun 13 22:31:32 [conn5] build index config.version { _id: 1 }
m30000| Wed Jun 13 22:31:32 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [conn4] build index config.chunks { _id: 1 }
m30999| Wed Jun 13 22:31:32 [mongosMain] waiting for connections on port 30999
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [conn4] info: creating collection config.chunks on add index
m30000| Wed Jun 13 22:31:32 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [conn4] build index config.shards { _id: 1 }
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [conn4] info: creating collection config.shards on add index
m30000| Wed Jun 13 22:31:32 [conn4] build index config.shards { host: 1 }
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:32 [websvr] admin web console waiting for connections on port 31999
m30999| Wed Jun 13 22:31:32 [Balancer] about to contact config servers and shards
m30999| Wed Jun 13 22:31:32 [Balancer] config servers and shards contacted successfully
m30999| Wed Jun 13 22:31:32 [Balancer] balancer id: tp2.10gen.cc:30999 started at Jun 13 22:31:32
m30999| Wed Jun 13 22:31:32 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:57000 #6 (6 connections now open)
m30000| Wed Jun 13 22:31:32 [conn5] build index config.mongos { _id: 1 }
m30000| Wed Jun 13 22:31:32 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:32 [LockPinger] creating distributed lock ping thread for localhost:30000 and process tp2.10gen.cc:30999:1339644692:1804289383 (sleeping for 30000ms)
m30000| Wed Jun 13 22:31:32 [conn4] build index config.lockpings { _id: 1 }
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [conn6] build index config.locks { _id: 1 }
m30000| Wed Jun 13 22:31:32 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [conn4] build index config.lockpings { ping: 1 }
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Wed Jun 13 22:31:32 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644692:1804289383' acquired, ts : 4fd95b14e47ca6d37eed4d87
m30000| Wed Jun 13 22:31:32 [FileAllocator] done allocating datafile /data/db/array_shard_key0/config.1, size: 32MB, took 0.088 secs
m30999| Wed Jun 13 22:31:32 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644692:1804289383' unlocked.
m30999| Wed Jun 13 22:31:32 [mongosMain] connection accepted from 127.0.0.1:50158 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Wed Jun 13 22:31:32 [conn] couldn't find database [admin] in config db
m30000| Wed Jun 13 22:31:32 [conn4] build index config.databases { _id: 1 }
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:32 [conn] put [admin] on: config:localhost:30000
m30999| Wed Jun 13 22:31:32 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:51264 #2 (2 connections now open)
m30999| Wed Jun 13 22:31:32 [conn] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30002| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:59412 #2 (2 connections now open)
m30999| Wed Jun 13 22:31:32 [conn] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
m30999| Wed Jun 13 22:31:32 [conn] couldn't find database [array_shard_key] in config db
m30999| Wed Jun 13 22:31:32 [conn] put [array_shard_key] on: shard0001:localhost:30001
m30000| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:57004 #7 (7 connections now open)
m30999| Wed Jun 13 22:31:32 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd95b14e47ca6d37eed4d86
m30999| Wed Jun 13 22:31:32 [conn] creating WriteBackListener for: localhost:30001 serverID: 4fd95b14e47ca6d37eed4d86
m30001| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:51267 #3 (3 connections now open)
m30002| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:59415 #3 (3 connections now open)
m30999| Wed Jun 13 22:31:32 [conn] creating WriteBackListener for: localhost:30002 serverID: 4fd95b14e47ca6d37eed4d86
m30001| Wed Jun 13 22:31:32 [conn3] _DEBUG ReadContext db wasn't open, will try to open array_shard_key.foo
m30001| Wed Jun 13 22:31:32 [conn3] opening db: array_shard_key
m30999| Wed Jun 13 22:31:32 [conn] enabling sharding on: array_shard_key
m30001| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:51269 #4 (4 connections now open)
m30999| Wed Jun 13 22:31:32 [conn] CMD: shardcollection: { shardcollection: "array_shard_key.foo", key: { _id: 1.0, i: 1.0 } }
m30999| Wed Jun 13 22:31:32 [conn] enable sharding on: array_shard_key.foo with shard key: { _id: 1.0, i: 1.0 }
m30999| Wed Jun 13 22:31:32 [conn] going to create 1 chunk(s) for: array_shard_key.foo using new epoch 4fd95b14e47ca6d37eed4d88
m30001| Wed Jun 13 22:31:32 [FileAllocator] allocating new datafile /data/db/array_shard_key1/array_shard_key.ns, filling with zeroes...
m30001| Wed Jun 13 22:31:32 [FileAllocator] creating directory /data/db/array_shard_key1/_tmp
m30999| Wed Jun 13 22:31:32 [conn] ChunkManager: time to load chunks for array_shard_key.foo: 0ms sequenceNumber: 2 version: 1|0||4fd95b14e47ca6d37eed4d88 based on: (empty)
m30999| Wed Jun 13 22:31:32 [conn] DEV WARNING appendDate() called with a tiny (but nonzero) date
m30000| Wed Jun 13 22:31:32 [conn4] build index config.collections { _id: 1 }
m30000| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Wed Jun 13 22:31:32 [FileAllocator] done allocating datafile /data/db/array_shard_key1/array_shard_key.ns, size: 16MB, took 0.039 secs
m30001| Wed Jun 13 22:31:32 [FileAllocator] allocating new datafile /data/db/array_shard_key1/array_shard_key.0, filling with zeroes...
m30001| Wed Jun 13 22:31:32 [FileAllocator] done allocating datafile /data/db/array_shard_key1/array_shard_key.0, size: 16MB, took 0.035 secs
m30001| Wed Jun 13 22:31:32 [conn4] datafileheader::init initializing /data/db/array_shard_key1/array_shard_key.0 n:0
m30001| Wed Jun 13 22:31:32 [FileAllocator] allocating new datafile /data/db/array_shard_key1/array_shard_key.1, filling with zeroes...
m30001| Wed Jun 13 22:31:32 [conn4] build index array_shard_key.foo { _id: 1 }
m30001| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Wed Jun 13 22:31:32 [conn4] info: creating collection array_shard_key.foo on add index
m30001| Wed Jun 13 22:31:32 [conn4] build index array_shard_key.foo { _id: 1.0, i: 1.0 }
m30001| Wed Jun 13 22:31:32 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Wed Jun 13 22:31:32 [conn3] no current chunk manager found for this shard, will initialize
m30000| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:57008 #8 (8 connections now open)
m30999| Wed Jun 13 22:31:32 [conn] splitting: array_shard_key.foo shard: ns:array_shard_key.foo at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey, i: MinKey } max: { _id: MaxKey, i: MaxKey }
m30000| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:57009 #9 (9 connections now open)
m30001| Wed Jun 13 22:31:32 [conn4] received splitChunk request: { splitChunk: "array_shard_key.foo", keyPattern: { _id: 1.0, i: 1.0 }, min: { _id: MinKey, i: MinKey }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 } ], shardId: "array_shard_key.foo-_id_MinKeyi_MinKey", configdb: "localhost:30000" }
m30001| Wed Jun 13 22:31:32 [conn4] created new distributed lock for array_shard_key.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Wed Jun 13 22:31:32 [LockPinger] creating distributed lock ping thread for localhost:30000 and process tp2.10gen.cc:30001:1339644692:935788104 (sleeping for 30000ms)
m30001| Wed Jun 13 22:31:32 [conn4] distributed lock 'array_shard_key.foo/tp2.10gen.cc:30001:1339644692:935788104' acquired, ts : 4fd95b140aa93db09c716bb5
m30001| Wed Jun 13 22:31:32 [conn4] splitChunk accepted at version 1|0||4fd95b14e47ca6d37eed4d88
m30000| Wed Jun 13 22:31:32 [conn9] info PageFaultRetryableSection will not yield, already locked upon reaching
m30001| Wed Jun 13 22:31:32 [conn4] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:32-0", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51269", time: new Date(1339644692303), what: "split", ns: "array_shard_key.foo", details: { before: { min: { _id: MinKey, i: MinKey }, max: { _id: MaxKey, i: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey, i: MinKey }, max: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd95b14e47ca6d37eed4d88') }, right: { min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd95b14e47ca6d37eed4d88') } } }
m30001| Wed Jun 13 22:31:32 [conn4] distributed lock 'array_shard_key.foo/tp2.10gen.cc:30001:1339644692:935788104' unlocked.
m30999| Wed Jun 13 22:31:32 [conn] ChunkManager: time to load chunks for array_shard_key.foo: 0ms sequenceNumber: 3 version: 1|2||4fd95b14e47ca6d37eed4d88 based on: 1|0||4fd95b14e47ca6d37eed4d88
m30999| Wed Jun 13 22:31:32 [conn] CMD: movechunk: { movechunk: "array_shard_key.foo", find: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, to: "localhost:30000" }
m30999| Wed Jun 13 22:31:32 [conn] moving chunk ns: array_shard_key.foo moving ( ns:array_shard_key.foo at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 } max: { _id: MaxKey, i: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Wed Jun 13 22:31:32 [conn4] received moveChunk request: { moveChunk: "array_shard_key.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "array_shard_key.foo-_id_ObjectId('4fd95b14e1c023902f17694e')i_1.0", configdb: "localhost:30000" }
m30001| Wed Jun 13 22:31:32 [conn4] created new distributed lock for array_shard_key.foo on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Wed Jun 13 22:31:32 [conn4] distributed lock 'array_shard_key.foo/tp2.10gen.cc:30001:1339644692:935788104' acquired, ts : 4fd95b140aa93db09c716bb6
m30001| Wed Jun 13 22:31:32 [conn4] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:32-1", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51269", time: new Date(1339644692308), what: "moveChunk.start", ns: "array_shard_key.foo", details: { min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Wed Jun 13 22:31:32 [conn4] moveChunk request accepted at version 1|2||4fd95b14e47ca6d37eed4d88
m30001| Wed Jun 13 22:31:32 [conn4] moveChunk number of documents: 0
m30000| Wed Jun 13 22:31:32 [conn9] opening db: admin
m30001| Wed Jun 13 22:31:32 [initandlisten] connection accepted from 127.0.0.1:51272 #5 (5 connections now open)
m30000| Wed Jun 13 22:31:32 [migrateThread] opening db: array_shard_key
m30000| Wed Jun 13 22:31:32 [FileAllocator] allocating new datafile /data/db/array_shard_key0/array_shard_key.ns, filling with zeroes...
m30001| Wed Jun 13 22:31:32 [FileAllocator] done allocating datafile /data/db/array_shard_key1/array_shard_key.1, size: 32MB, took 0.161 secs
m30000| Wed Jun 13 22:31:32 [FileAllocator] done allocating datafile /data/db/array_shard_key0/array_shard_key.ns, size: 16MB, took 0.129 secs
m30000| Wed Jun 13 22:31:32 [FileAllocator] allocating new datafile /data/db/array_shard_key0/array_shard_key.0, filling with zeroes...
m30000| Wed Jun 13 22:31:32 [FileAllocator] done allocating datafile /data/db/array_shard_key0/array_shard_key.0, size: 16MB, took 0.038 secs
m30000| Wed Jun 13 22:31:32 [migrateThread] datafileheader::init initializing /data/db/array_shard_key0/array_shard_key.0 n:0
m30000| Wed Jun 13 22:31:32 [FileAllocator] allocating new datafile /data/db/array_shard_key0/array_shard_key.1, filling with zeroes...
m30000| Wed Jun 13 22:31:32 [migrateThread] build index array_shard_key.foo { _id: 1 }
m30000| Wed Jun 13 22:31:32 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [migrateThread] info: creating collection array_shard_key.foo on add index
m30000| Wed Jun 13 22:31:32 [migrateThread] build index array_shard_key.foo { _id: 1.0, i: 1.0 }
m30000| Wed Jun 13 22:31:32 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:32 [migrateThread] migrate commit succeeded flushing to secondaries for 'array_shard_key.foo' { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 } -> { _id: MaxKey, i: MaxKey }
m30000| Wed Jun 13 22:31:32 [FileAllocator] done allocating datafile /data/db/array_shard_key0/array_shard_key.1, size: 32MB, took 0.208 secs
m30001| Wed Jun 13 22:31:33 [conn4] moveChunk data transfer progress: { active: true, ns: "array_shard_key.foo", from: "localhost:30001", min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, shardKeyPattern: { _id: 1, i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Wed Jun 13 22:31:33 [conn4] moveChunk setting version to: 2|0||4fd95b14e47ca6d37eed4d88
m30000| Wed Jun 13 22:31:33 [migrateThread] migrate commit succeeded flushing to secondaries for 'array_shard_key.foo' { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 } -> { _id: MaxKey, i: MaxKey }
m30000| Wed Jun 13 22:31:33 [migrateThread] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:33-0", server: "tp2.10gen.cc", clientAddr: ":27017", time: new Date(1339644693309), what: "moveChunk.to", ns: "array_shard_key.foo", details: { min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, step1 of 5: 198, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 801 } }
m30000| Wed Jun 13 22:31:33 [initandlisten] connection accepted from 127.0.0.1:57011 #10 (10 connections now open)
m30001| Wed Jun 13 22:31:33 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "array_shard_key.foo", from: "localhost:30001", min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, shardKeyPattern: { _id: 1, i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Wed Jun 13 22:31:33 [conn4] moveChunk updating self version to: 2|1||4fd95b14e47ca6d37eed4d88 through { _id: MinKey, i: MinKey } -> { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 } for collection 'array_shard_key.foo'
m30001| Wed Jun 13 22:31:33 [conn4] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:33-2", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51269", time: new Date(1339644693312), what: "moveChunk.commit", ns: "array_shard_key.foo", details: { min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Wed Jun 13 22:31:33 [conn4] doing delete inline
m30001| Wed Jun 13 22:31:33 [conn4] moveChunk deleted: 0
m30001| Wed Jun 13 22:31:33 [conn4] distributed lock 'array_shard_key.foo/tp2.10gen.cc:30001:1339644692:935788104' unlocked.
m30001| Wed Jun 13 22:31:33 [conn4] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:33-3", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51269", time: new Date(1339644693313), what: "moveChunk.from", ns: "array_shard_key.foo", details: { min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 3, step6 of 6: 0 } }
m30001| Wed Jun 13 22:31:33 [conn4] command admin.$cmd command: { moveChunk: "array_shard_key.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd95b14e1c023902f17694e'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "array_shard_key.foo-_id_ObjectId('4fd95b14e1c023902f17694e')i_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:577 w:86994 reslen:37 1006ms
m30999| Wed Jun 13 22:31:33 [conn] ChunkManager: time to load chunks for array_shard_key.foo: 1ms sequenceNumber: 4 version: 2|1||4fd95b14e47ca6d37eed4d88 based on: 1|2||4fd95b14e47ca6d37eed4d88
{ "millis" : 1009, "ok" : 1 }
[
{
"_id" : "array_shard_key.foo-_id_MinKeyi_MinKey",
"lastmod" : Timestamp(2000, 1),
"lastmodEpoch" : ObjectId("4fd95b14e47ca6d37eed4d88"),
"ns" : "array_shard_key.foo",
"min" : {
"_id" : { $minKey : 1 },
"i" : { $minKey : 1 }
},
"max" : {
"_id" : ObjectId("4fd95b14e1c023902f17694e"),
"i" : 1
},
"shard" : "shard0001"
},
{
"_id" : "array_shard_key.foo-_id_ObjectId('4fd95b14e1c023902f17694e')i_1.0",
"lastmod" : Timestamp(2000, 0),
"lastmodEpoch" : ObjectId("4fd95b14e47ca6d37eed4d88"),
"ns" : "array_shard_key.foo",
"min" : {
"_id" : ObjectId("4fd95b14e1c023902f17694e"),
"i" : 1
},
"max" : {
"_id" : { $maxKey : 1 },
"i" : { $maxKey : 1 }
},
"shard" : "shard0000"
}
]
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "array_shard_key", "partitioned" : true, "primary" : "shard0001" }
          array_shard_key.foo chunks:
              shard0001  1
              shard0000  1
              { "_id" : { $minKey : 1 }, "i" : { $minKey : 1 } } -->> { "_id" : ObjectId("4fd95b14e1c023902f17694e"), "i" : 1 } on : shard0001 Timestamp(2000, 1)
              { "_id" : ObjectId("4fd95b14e1c023902f17694e"), "i" : 1 } -->> { "_id" : { $maxKey : 1 }, "i" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
1: insert some invalid data
m30999| Wed Jun 13 22:31:33 [conn] warning: shard key mismatch for insert { _id: ObjectId('4fd95b15e47ca6d37eed4d89'), _id: ObjectId('4fd95b15e1c023902f17694f'), i: [ 1.0, 2.0 ] }, expected values for { _id: 1.0, i: 1.0 }, reloading config data to ensure not stale
m30999| Wed Jun 13 22:31:34 [conn] tried to insert object with no valid shard key for { _id: 1.0, i: 1.0 } : { _id: ObjectId('4fd95b15e47ca6d37eed4d8a'), _id: ObjectId('4fd95b15e1c023902f17694f'), i: [ 1.0, 2.0 ] }
"tried to insert object with no valid shard key for { _id: 1.0, i: 1.0 } : { _id: ObjectId('4fd95b15e47ca6d37eed4d8a'), _id: ObjectId('4fd95b15e1c023902f17694f'), i: [ 1.0, 2.0 ] }"
m30000| Wed Jun 13 22:31:34 [conn7] no current chunk manager found for this shard, will initialize
m30999| range.universal(): 1
m30999| range.universal(): 1
"full shard key must be in update object for collection: array_shard_key.foo"
m30999| range.universal(): 1
"multi-updates require $ops rather than replacement object"
m30999| range.universal(): 1
"cannot modify shard key for collection array_shard_key.foo, found new value for i"
m30999| range.universal(): 1
m30999| range.universal(): 1
m30999| range.universal(): 1
m30999| range.universal(): 1
m30999| range.universal(): 1
m30999| range.universal(): 1
m30999| range.universal(): 1
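The rejected operations above are mongos shard-key validation for the { _id: 1, i: 1 } key: an array value in a shard-key field, a replacement update that omits the full shard key, a multi-update given a replacement document, and an update that changes a shard-key field. A hedged mongo-shell sketch of operations that would produce those error strings (the document values are placeholders, not the test's actual data):

    var coll = connect("localhost:30999/array_shard_key").foo;
    coll.insert({ i: [ 1, 2 ] });                                 // array in shard-key field "i" -> "no valid shard key"
    var doc = coll.findOne();                                     // some existing document
    coll.update({ _id: doc._id }, { i: 5 });                      // replacement lacks the full shard key
    coll.update({ i: 1 }, { i: 5 }, false, true);                 // multi-update needs $ operators, not a replacement object
    coll.update({ _id: doc._id, i: doc.i }, { $set: { i: 99 } }); // tries to modify shard-key field "i"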
"Sharding-then-inserting-multikey tested, now trying inserting-then-sharding-multikey"
m30001| Wed Jun 13 22:31:36 [conn3] build index array_shard_key.foo2 { _id: 1 }
m30001| Wed Jun 13 22:31:36 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Wed Jun 13 22:31:36 [conn3] build index array_shard_key.foo2 { _id: 1.0, i: 1.0 }
m30001| Wed Jun 13 22:31:36 [conn3] build index done. scanned 10 total records. 0 secs
{ "ok" : 0, "errmsg" : "couldn't find valid index for shard key" }
assert failed
Error("Printing Stack Trace")@:0
()@src/mongo/shell/utils.js:37
("assert failed")@src/mongo/shell/utils.js:58
(false)@src/mongo/shell/utils.js:66
([object DBCollection],[object Object],[object Object])@src/mongo/shell/shardingtest.js:866
@/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/array_shard_key.js:102
Correctly threw error on sharding with multikey index.
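The failure above is shardCollection refusing a key whose only supporting index is multikey: foo2 already holds documents with array values for i, so the { _id: 1, i: 1 } index built over them is multikey and cannot back the shard key. A short mongo-shell sketch of that scenario (collection name and values mirror the log; the test's exact loop may differ):

    var testdb = connect("localhost:30999/array_shard_key");
    for (var n = 0; n < 10; n++) testdb.foo2.insert({ i: [ 1, 2 ] });   // array values make the index multikey
    testdb.foo2.ensureIndex({ _id: 1, i: 1 });
    testdb.getSiblingDB("admin").runCommand({ shardCollection: "array_shard_key.foo2", key: { _id: 1, i: 1 } });
    // => { "ok" : 0, "errmsg" : "couldn't find valid index for shard key" }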
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "array_shard_key", "partitioned" : true, "primary" : "shard0001" }
          array_shard_key.foo chunks:
              shard0001  1
              shard0000  1
              { "_id" : { $minKey : 1 }, "i" : { $minKey : 1 } } -->> { "_id" : ObjectId("4fd95b14e1c023902f17694e"), "i" : 1 } on : shard0001 Timestamp(2000, 1)
              { "_id" : ObjectId("4fd95b14e1c023902f17694e"), "i" : 1 } -->> { "_id" : { $maxKey : 1 }, "i" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
m30001| Wed Jun 13 22:31:36 [conn3] build index array_shard_key.foo23 { _id: 1 }
m30001| Wed Jun 13 22:31:36 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Wed Jun 13 22:31:36 [conn3] build index array_shard_key.foo23 { _id: 1.0, i: 1.0 }
m30001| Wed Jun 13 22:31:36 [conn3] build index done. scanned 10 total records. 0 secs
m30999| Wed Jun 13 22:31:36 [conn] CMD: shardcollection: { shardcollection: "array_shard_key.foo23", key: { _id: 1.0, i: 1.0 } }
m30999| Wed Jun 13 22:31:36 [conn] enable sharding on: array_shard_key.foo23 with shard key: { _id: 1.0, i: 1.0 }
m30999| Wed Jun 13 22:31:36 [conn] going to create 1 chunk(s) for: array_shard_key.foo23 using new epoch 4fd95b18e47ca6d37eed4d8b
m30999| Wed Jun 13 22:31:36 [conn] ChunkManager: time to load chunks for array_shard_key.foo23: 0ms sequenceNumber: 5 version: 1|0||4fd95b18e47ca6d37eed4d8b based on: (empty)
m30001| Wed Jun 13 22:31:36 [conn3] no current chunk manager found for this shard, will initialize
m30999| Wed Jun 13 22:31:36 [conn] splitting: array_shard_key.foo23 shard: ns:array_shard_key.foo23 at: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey, i: MinKey } max: { _id: MaxKey, i: MaxKey }
m30001| Wed Jun 13 22:31:36 [conn4] received splitChunk request: { splitChunk: "array_shard_key.foo23", keyPattern: { _id: 1.0, i: 1.0 }, min: { _id: MinKey, i: MinKey }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 } ], shardId: "array_shard_key.foo23-_id_MinKeyi_MinKey", configdb: "localhost:30000" }
m30001| Wed Jun 13 22:31:36 [conn4] created new distributed lock for array_shard_key.foo23 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Wed Jun 13 22:31:36 [conn4] distributed lock 'array_shard_key.foo23/tp2.10gen.cc:30001:1339644692:935788104' acquired, ts : 4fd95b180aa93db09c716bb7
m30001| Wed Jun 13 22:31:36 [conn4] splitChunk accepted at version 1|0||4fd95b18e47ca6d37eed4d8b
m30001| Wed Jun 13 22:31:36 [conn4] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:36-4", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51269", time: new Date(1339644696499), what: "split", ns: "array_shard_key.foo23", details: { before: { min: { _id: MinKey, i: MinKey }, max: { _id: MaxKey, i: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey, i: MinKey }, max: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd95b18e47ca6d37eed4d8b') }, right: { min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd95b18e47ca6d37eed4d8b') } } }
m30001| Wed Jun 13 22:31:36 [conn4] distributed lock 'array_shard_key.foo23/tp2.10gen.cc:30001:1339644692:935788104' unlocked.
m30999| Wed Jun 13 22:31:36 [conn] ChunkManager: time to load chunks for array_shard_key.foo23: 0ms sequenceNumber: 6 version: 1|2||4fd95b18e47ca6d37eed4d8b based on: 1|0||4fd95b18e47ca6d37eed4d8b
m30999| Wed Jun 13 22:31:36 [conn] CMD: movechunk: { movechunk: "array_shard_key.foo23", find: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, to: "localhost:30000" }
m30999| Wed Jun 13 22:31:36 [conn] moving chunk ns: array_shard_key.foo23 moving ( ns:array_shard_key.foo23 at: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 } max: { _id: MaxKey, i: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Wed Jun 13 22:31:36 [conn4] received moveChunk request: { moveChunk: "array_shard_key.foo23", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "array_shard_key.foo23-_id_ObjectId('4fd95b18e1c023902f176968')i_1.0", configdb: "localhost:30000" }
m30001| Wed Jun 13 22:31:36 [conn4] created new distributed lock for array_shard_key.foo23 on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30001| Wed Jun 13 22:31:36 [conn4] distributed lock 'array_shard_key.foo23/tp2.10gen.cc:30001:1339644692:935788104' acquired, ts : 4fd95b180aa93db09c716bb8
m30001| Wed Jun 13 22:31:36 [conn4] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:36-5", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51269", time: new Date(1339644696507), what: "moveChunk.start", ns: "array_shard_key.foo23", details: { min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Wed Jun 13 22:31:36 [conn4] moveChunk request accepted at version 1|2||4fd95b18e47ca6d37eed4d8b
m30001| Wed Jun 13 22:31:36 [conn4] moveChunk number of documents: 0
m30000| Wed Jun 13 22:31:36 [migrateThread] build index array_shard_key.foo23 { _id: 1 }
m30000| Wed Jun 13 22:31:36 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:36 [migrateThread] info: creating collection array_shard_key.foo23 on add index
m30000| Wed Jun 13 22:31:36 [migrateThread] build index array_shard_key.foo23 { _id: 1.0, i: 1.0 }
m30000| Wed Jun 13 22:31:36 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:31:36 [migrateThread] migrate commit succeeded flushing to secondaries for 'array_shard_key.foo23' { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 } -> { _id: MaxKey, i: MaxKey }
m30001| Wed Jun 13 22:31:37 [conn4] moveChunk data transfer progress: { active: true, ns: "array_shard_key.foo23", from: "localhost:30001", min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, shardKeyPattern: { _id: 1, i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Wed Jun 13 22:31:37 [conn4] moveChunk setting version to: 2|0||4fd95b18e47ca6d37eed4d8b
m30000| Wed Jun 13 22:31:37 [migrateThread] migrate commit succeeded flushing to secondaries for 'array_shard_key.foo23' { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 } -> { _id: MaxKey, i: MaxKey }
m30000| Wed Jun 13 22:31:37 [migrateThread] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:37-1", server: "tp2.10gen.cc", clientAddr: ":27017", time: new Date(1339644697516), what: "moveChunk.to", ns: "array_shard_key.foo23", details: { min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, step1 of 5: 2, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1004 } }
m30001| Wed Jun 13 22:31:37 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "array_shard_key.foo23", from: "localhost:30001", min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, shardKeyPattern: { _id: 1, i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Wed Jun 13 22:31:37 [conn4] moveChunk updating self version to: 2|1||4fd95b18e47ca6d37eed4d8b through { _id: MinKey, i: MinKey } -> { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 } for collection 'array_shard_key.foo23'
m30001| Wed Jun 13 22:31:37 [conn4] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:37-6", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51269", time: new Date(1339644697518), what: "moveChunk.commit", ns: "array_shard_key.foo23", details: { min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Wed Jun 13 22:31:37 [conn4] doing delete inline
m30001| Wed Jun 13 22:31:37 [conn4] moveChunk deleted: 0
m30001| Wed Jun 13 22:31:37 [conn4] distributed lock 'array_shard_key.foo23/tp2.10gen.cc:30001:1339644692:935788104' unlocked.
m30001| Wed Jun 13 22:31:37 [conn4] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:31:37-7", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:51269", time: new Date(1339644697520), what: "moveChunk.from", ns: "array_shard_key.foo23", details: { min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 9, step6 of 6: 0 } }
m30001| Wed Jun 13 22:31:37 [conn4] command admin.$cmd command: { moveChunk: "array_shard_key.foo23", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('4fd95b18e1c023902f176968'), i: 1.0 }, max: { _id: MaxKey, i: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "array_shard_key.foo23-_id_ObjectId('4fd95b18e1c023902f176968')i_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) r:2065 w:87190 reslen:37 1014ms
m30999| Wed Jun 13 22:31:37 [conn] ChunkManager: time to load chunks for array_shard_key.foo23: 1ms sequenceNumber: 7 version: 2|1||4fd95b18e47ca6d37eed4d8b based on: 1|2||4fd95b18e47ca6d37eed4d8b
{ "millis" : 1017, "ok" : 1 }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "shard0000", "host" : "localhost:30000" }
      { "_id" : "shard0001", "host" : "localhost:30001" }
      { "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "array_shard_key", "partitioned" : true, "primary" : "shard0001" }
          array_shard_key.foo chunks:
              shard0001  1
              shard0000  1
              { "_id" : { $minKey : 1 }, "i" : { $minKey : 1 } } -->> { "_id" : ObjectId("4fd95b14e1c023902f17694e"), "i" : 1 } on : shard0001 Timestamp(2000, 1)
              { "_id" : ObjectId("4fd95b14e1c023902f17694e"), "i" : 1 } -->> { "_id" : { $maxKey : 1 }, "i" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
          array_shard_key.foo23 chunks:
              shard0001  1
              shard0000  1
              { "_id" : { $minKey : 1 }, "i" : { $minKey : 1 } } -->> { "_id" : ObjectId("4fd95b18e1c023902f176968"), "i" : 1 } on : shard0001 Timestamp(2000, 1)
              { "_id" : ObjectId("4fd95b18e1c023902f176968"), "i" : 1 } -->> { "_id" : { $maxKey : 1 }, "i" : { $maxKey : 1 } } on : shard0000 Timestamp(2000, 0)
m30999| Wed Jun 13 22:31:37 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Wed Jun 13 22:31:37 [conn3] end connection 127.0.0.1:56997 (9 connections now open)
m30000| Wed Jun 13 22:31:37 [conn6] end connection 127.0.0.1:57000 (9 connections now open)
m30000| Wed Jun 13 22:31:37 [conn4] end connection 127.0.0.1:56998 (7 connections now open)
m30000| Wed Jun 13 22:31:37 [conn7] end connection 127.0.0.1:57004 (6 connections now open)
m30001| Wed Jun 13 22:31:37 [conn3] end connection 127.0.0.1:51267 (4 connections now open)
m30002| Wed Jun 13 22:31:37 [conn3] end connection 127.0.0.1:59415 (2 connections now open)
m30001| Wed Jun 13 22:31:37 [conn4] end connection 127.0.0.1:51269 (3 connections now open)
Wed Jun 13 22:31:38 shell: stopped mongo program on port 30999
m30000| Wed Jun 13 22:31:38 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Wed Jun 13 22:31:38 [interruptThread] now exiting
m30000| Wed Jun 13 22:31:38 dbexit:
m30000| Wed Jun 13 22:31:38 [interruptThread] shutdown: going to close listening sockets...
m30000| Wed Jun 13 22:31:38 [interruptThread] closing listening socket: 15
m30000| Wed Jun 13 22:31:38 [interruptThread] closing listening socket: 16
m30000| Wed Jun 13 22:31:38 [interruptThread] closing listening socket: 18
m30000| Wed Jun 13 22:31:38 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Wed Jun 13 22:31:38 [interruptThread] shutdown: going to flush diaglog...
m30000| Wed Jun 13 22:31:38 [interruptThread] shutdown: going to close sockets...
m30000| Wed Jun 13 22:31:38 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Wed Jun 13 22:31:38 [interruptThread] shutdown: closing all files...
m30001| Wed Jun 13 22:31:38 [conn5] end connection 127.0.0.1:51272 (2 connections now open)
m30000| Wed Jun 13 22:31:38 [conn10] end connection 127.0.0.1:57011 (5 connections now open)
m30000| Wed Jun 13 22:31:38 [interruptThread] closeAllFiles() finished
m30000| Wed Jun 13 22:31:38 [interruptThread] shutdown: removing fs lock...
m30000| Wed Jun 13 22:31:38 dbexit: really exiting now
Wed Jun 13 22:31:39 shell: stopped mongo program on port 30000
m30001| Wed Jun 13 22:31:39 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Wed Jun 13 22:31:39 [interruptThread] now exiting
m30001| Wed Jun 13 22:31:39 dbexit:
m30001| Wed Jun 13 22:31:39 [interruptThread] shutdown: going to close listening sockets...
m30001| Wed Jun 13 22:31:39 [interruptThread] closing listening socket: 18
m30001| Wed Jun 13 22:31:39 [interruptThread] closing listening socket: 19
m30001| Wed Jun 13 22:31:39 [interruptThread] closing listening socket: 21
m30001| Wed Jun 13 22:31:39 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Wed Jun 13 22:31:39 [interruptThread] shutdown: going to flush diaglog...
m30001| Wed Jun 13 22:31:39 [interruptThread] shutdown: going to close sockets...
m30001| Wed Jun 13 22:31:39 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Wed Jun 13 22:31:39 [interruptThread] shutdown: closing all files...
m30001| Wed Jun 13 22:31:39 [interruptThread] closeAllFiles() finished
m30001| Wed Jun 13 22:31:39 [interruptThread] shutdown: removing fs lock...
m30001| Wed Jun 13 22:31:39 dbexit: really exiting now
Wed Jun 13 22:31:40 shell: stopped mongo program on port 30001
m30002| Wed Jun 13 22:31:40 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Wed Jun 13 22:31:40 [interruptThread] now exiting
m30002| Wed Jun 13 22:31:40 dbexit:
m30002| Wed Jun 13 22:31:40 [interruptThread] shutdown: going to close listening sockets...
m30002| Wed Jun 13 22:31:40 [interruptThread] closing listening socket: 21
m30002| Wed Jun 13 22:31:40 [interruptThread] closing listening socket: 22
m30002| Wed Jun 13 22:31:40 [interruptThread] closing listening socket: 23
m30002| Wed Jun 13 22:31:40 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Wed Jun 13 22:31:40 [interruptThread] shutdown: going to flush diaglog...
m30002| Wed Jun 13 22:31:40 [interruptThread] shutdown: going to close sockets...
m30002| Wed Jun 13 22:31:40 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Wed Jun 13 22:31:40 [interruptThread] shutdown: closing all files...
m30002| Wed Jun 13 22:31:40 [interruptThread] closeAllFiles() finished
m30002| Wed Jun 13 22:31:40 [interruptThread] shutdown: removing fs lock...
m30002| Wed Jun 13 22:31:40 dbexit: really exiting now
Wed Jun 13 22:31:41 shell: stopped mongo program on port 30002
*** ShardingTest array_shard_key completed successfully in 10.229 seconds ***
10274.147987ms
Wed Jun 13 22:31:41 [initandlisten] connection accepted from 127.0.0.1:54052 #7 (6 connections now open)
*******************************************
Test : auth.js ...
Command : /home/yellow/buildslave/Linux_32bit_debug/mongo/mongo --port 27999 --nodb /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/auth.js --eval TestData = new Object();TestData.testPath = "/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/auth.js";TestData.testFile = "auth.js";TestData.testName = "auth";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Wed Jun 13 22:31:41 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/auth1-config0'
Wed Jun 13 22:31:41 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 29000 --dbpath /data/db/auth1-config0 --keyFile jstests/libs/key1
m29000| Wed Jun 13 22:31:41
m29000| Wed Jun 13 22:31:41 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m29000| Wed Jun 13 22:31:41
m29000| Wed Jun 13 22:31:41 [initandlisten] MongoDB starting : pid=10198 port=29000 dbpath=/data/db/auth1-config0 32-bit host=tp2.10gen.cc
m29000| Wed Jun 13 22:31:41 [initandlisten] _DEBUG build (which is slower)
m29000| Wed Jun 13 22:31:41 [initandlisten]
m29000| Wed Jun 13 22:31:41 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m29000| Wed Jun 13 22:31:41 [initandlisten] ** Not recommended for production.
m29000| Wed Jun 13 22:31:41 [initandlisten]
m29000| Wed Jun 13 22:31:41 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m29000| Wed Jun 13 22:31:41 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m29000| Wed Jun 13 22:31:41 [initandlisten] ** with --journal, the limit is lower
m29000| Wed Jun 13 22:31:41 [initandlisten]
m29000| Wed Jun 13 22:31:41 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m29000| Wed Jun 13 22:31:41 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m29000| Wed Jun 13 22:31:41 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m29000| Wed Jun 13 22:31:41 [initandlisten] options: { dbpath: "/data/db/auth1-config0", keyFile: "jstests/libs/key1", port: 29000 }
m29000| Wed Jun 13 22:31:41 [initandlisten] opening db: local
m29000| Wed Jun 13 22:31:41 [initandlisten] opening db: admin
m29000| Wed Jun 13 22:31:41 [initandlisten] waiting for connections on port 29000
m29000| Wed Jun 13 22:31:41 [websvr] admin web console waiting for connections on port 30000
m29000| Wed Jun 13 22:31:41 [initandlisten] connection accepted from 127.0.0.1:35984 #1 (1 connection now open)
m29000| Wed Jun 13 22:31:41 [conn1] note: no users configured in admin.system.users, allowing localhost access
"tp2.10gen.cc:29000"
m29000| Wed Jun 13 22:31:41 [initandlisten] connection accepted from 184.173.149.242:55426 #2 (2 connections now open)
ShardingTest auth1 :
{ "config" : "tp2.10gen.cc:29000", "shards" : [ ] }
Wed Jun 13 22:31:41 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --port 30999 --configdb tp2.10gen.cc:29000 --keyFile jstests/libs/key1
m30999| Wed Jun 13 22:31:41 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Jun 13 22:31:41 [mongosMain] MongoS version 2.1.2-pre- starting: pid=10213 port=30999 32-bit host=tp2.10gen.cc (--help for usage)
m30999| Wed Jun 13 22:31:41 [mongosMain] _DEBUG build
m30999| Wed Jun 13 22:31:41 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Wed Jun 13 22:31:41 [mongosMain] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30999| Wed Jun 13 22:31:41 [mongosMain] options: { configdb: "tp2.10gen.cc:29000", keyFile: "jstests/libs/key1", port: 30999 }
m29000| Wed Jun 13 22:31:41 [initandlisten] connection accepted from 184.173.149.242:55428 #3 (3 connections now open)
m29000| Wed Jun 13 22:31:41 [conn3] authenticate db: local { authenticate: 1, nonce: "ad13705dddb0c16d", user: "__system", key: "feaa192a176c57837065501f444ceb45" }
m29000| Wed Jun 13 22:31:41 [conn3] opening db: config
m29000| Wed Jun 13 22:31:41 [initandlisten] connection accepted from 184.173.149.242:55429 #4 (4 connections now open)
m29000| Wed Jun 13 22:31:41 [initandlisten] connection accepted from 184.173.149.242:55430 #5 (5 connections now open)
m29000| Wed Jun 13 22:31:41 [conn4] authenticate db: local { authenticate: 1, nonce: "37a5948c709405a0", user: "__system", key: "1dffe60f5460f110f7beaccea6b81257" }
m29000| Wed Jun 13 22:31:41 [conn5] authenticate db: local { authenticate: 1, nonce: "d7c38a7923989d25", user: "__system", key: "e90a3d6d0a2026d6020c91d32373f259" }
m29000| Wed Jun 13 22:31:41 [FileAllocator] allocating new datafile /data/db/auth1-config0/config.ns, filling with zeroes...
m29000| Wed Jun 13 22:31:41 [FileAllocator] creating directory /data/db/auth1-config0/_tmp
m29000| Wed Jun 13 22:31:41 [FileAllocator] done allocating datafile /data/db/auth1-config0/config.ns, size: 16MB, took 0.038 secs
m29000| Wed Jun 13 22:31:41 [FileAllocator] allocating new datafile /data/db/auth1-config0/config.0, filling with zeroes...
m29000| Wed Jun 13 22:31:41 [FileAllocator] done allocating datafile /data/db/auth1-config0/config.0, size: 16MB, took 0.039 secs
m29000| Wed Jun 13 22:31:41 [conn5] datafileheader::init initializing /data/db/auth1-config0/config.0 n:0
m29000| Wed Jun 13 22:31:41 [FileAllocator] allocating new datafile /data/db/auth1-config0/config.1, filling with zeroes...
m29000| Wed Jun 13 22:31:41 [conn5] build index config.version { _id: 1 }
m29000| Wed Jun 13 22:31:41 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:41 [websvr] admin web console waiting for connections on port 31999
m29000| Wed Jun 13 22:31:41 [conn3] build index config.settings { _id: 1 }
m30999| Wed Jun 13 22:31:41 [mongosMain] waiting for connections on port 30999
m30999| Wed Jun 13 22:31:41 [Balancer] about to contact config servers and shards
m29000| Wed Jun 13 22:31:41 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:41 [Balancer] config servers and shards contacted successfully
m30999| Wed Jun 13 22:31:41 [Balancer] balancer id: tp2.10gen.cc:30999 started at Jun 13 22:31:41
m30999| Wed Jun 13 22:31:41 [Balancer] created new distributed lock for balancer on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Wed Jun 13 22:31:41 [conn3] build index config.chunks { _id: 1 }
m29000| Wed Jun 13 22:31:41 [initandlisten] connection accepted from 184.173.149.242:55431 #6 (6 connections now open)
m29000| Wed Jun 13 22:31:41 [conn6] authenticate db: local { authenticate: 1, nonce: "7d2dedfca35f6b1d", user: "__system", key: "dd5f44f0eed9690c4d09cfe53ea35d77" }
m29000| Wed Jun 13 22:31:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:41 [conn3] info: creating collection config.chunks on add index
m29000| Wed Jun 13 22:31:41 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Wed Jun 13 22:31:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:41 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Wed Jun 13 22:31:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:41 [conn5] build index config.mongos { _id: 1 }
m29000| Wed Jun 13 22:31:41 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:41 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Wed Jun 13 22:31:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:41 [conn3] build index config.shards { _id: 1 }
m29000| Wed Jun 13 22:31:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:41 [conn3] info: creating collection config.shards on add index
m29000| Wed Jun 13 22:31:41 [conn3] build index config.shards { host: 1 }
m29000| Wed Jun 13 22:31:41 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:41 [LockPinger] creating distributed lock ping thread for tp2.10gen.cc:29000 and process tp2.10gen.cc:30999:1339644701:1804289383 (sleeping for 30000ms)
m29000| Wed Jun 13 22:31:41 [conn3] build index config.lockpings { _id: 1 }
m29000| Wed Jun 13 22:31:41 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:41 [conn6] build index config.locks { _id: 1 }
m29000| Wed Jun 13 22:31:41 [conn6] build index done. scanned 0 total records. 0 secs
m29000| Wed Jun 13 22:31:41 [conn3] build index config.lockpings { ping: 1 }
m29000| Wed Jun 13 22:31:41 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Wed Jun 13 22:31:41 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b1d93454f4c315250f6
m30999| Wed Jun 13 22:31:41 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m29000| Wed Jun 13 22:31:41 [FileAllocator] done allocating datafile /data/db/auth1-config0/config.1, size: 32MB, took 0.073 secs
m30999| Wed Jun 13 22:31:42 [mongosMain] connection accepted from 127.0.0.1:50178 #1 (1 connection now open)
logging in first, if there was an unclean shutdown the user might already exist
m30999| Wed Jun 13 22:31:42 [conn] couldn't find database [admin] in config db
m29000| Wed Jun 13 22:31:42 [conn3] build index config.databases { _id: 1 }
m29000| Wed Jun 13 22:31:42 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:31:42 [conn] put [admin] on: config:tp2.10gen.cc:29000
m30999| Wed Jun 13 22:31:42 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "7dc25a994f16f539", key: "4f493df9f0a0ca48bae448071798571d" }
m30999| Wed Jun 13 22:31:42 [conn] auth: couldn't find user foo, admin.system.users
{ "ok" : 0, "errmsg" : "auth fails" }
m29000| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:55433 #7 (7 connections now open)
m29000| Wed Jun 13 22:31:42 [conn7] authenticate db: local { authenticate: 1, nonce: "7bd19a410eea36ac", user: "__system", key: "1112f0f2fe2b65082880c856055eeaaf" }
m30999| Wed Jun 13 22:31:42 [conn] creating WriteBackListener for: tp2.10gen.cc:29000 serverID: 4fd95b1d93454f4c315250f5
m30999| Wed Jun 13 22:31:42 [conn] note: no users configured in admin.system.users, allowing localhost access
adding user
{
"user" : "foo",
"readOnly" : false,
"pwd" : "3563025c1e89c7ad43fb63fcbcf1c3c6",
"_id" : ObjectId("4fd95b1ee204cf4c84a13ae1")
}
m29000| Wed Jun 13 22:31:42 [FileAllocator] allocating new datafile /data/db/auth1-config0/admin.ns, filling with zeroes...
m29000| Wed Jun 13 22:31:42 [FileAllocator] done allocating datafile /data/db/auth1-config0/admin.ns, size: 16MB, took 0.045 secs
m29000| Wed Jun 13 22:31:42 [FileAllocator] allocating new datafile /data/db/auth1-config0/admin.0, filling with zeroes...
m29000| Wed Jun 13 22:31:42 [FileAllocator] done allocating datafile /data/db/auth1-config0/admin.0, size: 16MB, took 0.036 secs
m29000| Wed Jun 13 22:31:42 [conn7] datafileheader::init initializing /data/db/auth1-config0/admin.0 n:0
m29000| Wed Jun 13 22:31:42 [FileAllocator] allocating new datafile /data/db/auth1-config0/admin.1, filling with zeroes...
m29000| Wed Jun 13 22:31:42 [conn7] build index admin.system.users { _id: 1 }
m29000| Wed Jun 13 22:31:42 [conn7] build index done. scanned 0 total records. 0.023 secs
m29000| Wed Jun 13 22:31:42 [conn7] insert admin.system.users keyUpdates:0 locks(micros) W:470 r:461 w:113530 113ms
m30999| Wed Jun 13 22:31:42 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "6a7e4f3eeb9787a4", key: "5503deac4d4368a22d9d5674093d7bc4" }
{ "dbname" : "admin", "user" : "foo", "readOnly" : false, "ok" : 1 }
{
"singleShard" : "tp2.10gen.cc:29000",
"updatedExisting" : true,
"n" : 1,
"connectionId" : 7,
"err" : null,
"ok" : 1
}
[ { "_id" : "chunksize", "value" : 1 } ]
restart mongos
Wed Jun 13 22:31:42 No db started on port: 31000
Wed Jun 13 22:31:42 shell: stopped mongo program on port 31000
Wed Jun 13 22:31:42 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --port 31000 --configdb tp2.10gen.cc:29000 --keyFile jstests/libs/key1 --chunkSize 1
m31000| Wed Jun 13 22:31:42 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m31000| Wed Jun 13 22:31:42 [mongosMain] MongoS version 2.1.2-pre- starting: pid=10235 port=31000 32-bit host=tp2.10gen.cc (--help for usage)
m31000| Wed Jun 13 22:31:42 [mongosMain] _DEBUG build
m31000| Wed Jun 13 22:31:42 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31000| Wed Jun 13 22:31:42 [mongosMain] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31000| Wed Jun 13 22:31:42 [mongosMain] options: { chunkSize: 1, configdb: "tp2.10gen.cc:29000", keyFile: "jstests/libs/key1", port: 31000 }
m29000| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:55435 #8 (8 connections now open)
m29000| Wed Jun 13 22:31:42 [conn8] authenticate db: local { authenticate: 1, nonce: "431adbd5caed8fbe", user: "__system", key: "d4b0c712e16ce5a4ddc7a55a24f54953" }
m29000| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:55436 #9 (9 connections now open)
m31000| Wed Jun 13 22:31:42 [websvr] admin web console waiting for connections on port 32000
m31000| Wed Jun 13 22:31:42 [mongosMain] waiting for connections on port 31000
m29000| Wed Jun 13 22:31:42 [conn9] authenticate db: local { authenticate: 1, nonce: "ff5af35a23186ecd", user: "__system", key: "f6cb27b9c1fa076f7d62339d0200027e" }
m31000| Wed Jun 13 22:31:42 [Balancer] about to contact config servers and shards
m29000| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:55437 #10 (10 connections now open)
m29000| Wed Jun 13 22:31:42 [conn10] authenticate db: local { authenticate: 1, nonce: "86fab2357fe1dbc1", user: "__system", key: "fca60a21b4fdcf22dc0c01a7e2212657" }
m31000| Wed Jun 13 22:31:42 [Balancer] config servers and shards contacted successfully
m31000| Wed Jun 13 22:31:42 [Balancer] balancer id: tp2.10gen.cc:31000 started at Jun 13 22:31:42
m31000| Wed Jun 13 22:31:42 [Balancer] created new distributed lock for balancer on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m29000| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:55438 #11 (11 connections now open)
m29000| Wed Jun 13 22:31:42 [conn11] authenticate db: local { authenticate: 1, nonce: "eae273f0d7975c1d", user: "__system", key: "4cda6c49cfaaacdc4e35c06486c0d034" }
m31000| Wed Jun 13 22:31:42 [LockPinger] creating distributed lock ping thread for tp2.10gen.cc:29000 and process tp2.10gen.cc:31000:1339644702:1804289383 (sleeping for 30000ms)
m31000| Wed Jun 13 22:31:42 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b1ee24b46bcab13cf41
m31000| Wed Jun 13 22:31:42 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
m29000| Wed Jun 13 22:31:42 [FileAllocator] done allocating datafile /data/db/auth1-config0/admin.1, size: 32MB, took 0.067 secs
m31000| Wed Jun 13 22:31:42 [mongosMain] connection accepted from 127.0.0.1:35552 #1 (1 connection now open)
m31000| Wed Jun 13 22:31:42 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "2dc4756164d3f805", key: "df04f3795a15d5c695ec2272f339e6e2" }
{ "dbname" : "admin", "user" : "foo", "readOnly" : false, "ok" : 1 }
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key2",
"port" : 31100,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 0,
"set" : "d1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-0'
Wed Jun 13 22:31:42 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key2 --port 31100 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Wed Jun 13 22:31:42
m31100| Wed Jun 13 22:31:42 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Wed Jun 13 22:31:42
m31100| Wed Jun 13 22:31:42 [initandlisten] MongoDB starting : pid=10254 port=31100 dbpath=/data/db/d1-0 32-bit host=tp2.10gen.cc
m31100| Wed Jun 13 22:31:42 [initandlisten] _DEBUG build (which is slower)
m31100| Wed Jun 13 22:31:42 [initandlisten]
m31100| Wed Jun 13 22:31:42 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Wed Jun 13 22:31:42 [initandlisten] ** Not recommended for production.
m31100| Wed Jun 13 22:31:42 [initandlisten]
m31100| Wed Jun 13 22:31:42 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Wed Jun 13 22:31:42 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Wed Jun 13 22:31:42 [initandlisten] ** with --journal, the limit is lower
m31100| Wed Jun 13 22:31:42 [initandlisten]
m31100| Wed Jun 13 22:31:42 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Wed Jun 13 22:31:42 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Wed Jun 13 22:31:42 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31100| Wed Jun 13 22:31:42 [initandlisten] options: { dbpath: "/data/db/d1-0", keyFile: "jstests/libs/key2", noprealloc: true, oplogSize: 40, port: 31100, replSet: "d1", rest: true, smallfiles: true }
m31100| Wed Jun 13 22:31:42 [initandlisten] opening db: admin
m31100| Wed Jun 13 22:31:42 [initandlisten] waiting for connections on port 31100
m31100| Wed Jun 13 22:31:42 [websvr] admin web console waiting for connections on port 32100
m31100| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:42788 #1 (1 connection now open)
m31100| Wed Jun 13 22:31:42 [conn1] authenticate db: local { authenticate: 1, nonce: "ad59eebe27acb413", user: "__system", key: "2d4fc801c372de2cd4de55bdfd1c91dd" }
m31100| Wed Jun 13 22:31:42 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31100| Wed Jun 13 22:31:42 [conn1] opening db: local
m31100| Wed Jun 13 22:31:42 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Wed Jun 13 22:31:42 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31100| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 127.0.0.1:50879 #2 (2 connections now open)
m31100| Wed Jun 13 22:31:42 [conn2] note: no users configured in admin.system.users, allowing localhost access
[ connection to tp2.10gen.cc:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key2",
"port" : 31101,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 1,
"set" : "d1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-1'
Wed Jun 13 22:31:42 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key2 --port 31101 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-1
m31101| note: noprealloc may hurt performance in many applications
m31101| Wed Jun 13 22:31:42
m31101| Wed Jun 13 22:31:42 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Wed Jun 13 22:31:42
m31101| Wed Jun 13 22:31:42 [initandlisten] MongoDB starting : pid=10270 port=31101 dbpath=/data/db/d1-1 32-bit host=tp2.10gen.cc
m31101| Wed Jun 13 22:31:42 [initandlisten] _DEBUG build (which is slower)
m31101| Wed Jun 13 22:31:42 [initandlisten]
m31101| Wed Jun 13 22:31:42 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Wed Jun 13 22:31:42 [initandlisten] ** Not recommended for production.
m31101| Wed Jun 13 22:31:42 [initandlisten]
m31101| Wed Jun 13 22:31:42 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Wed Jun 13 22:31:42 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Wed Jun 13 22:31:42 [initandlisten] ** with --journal, the limit is lower
m31101| Wed Jun 13 22:31:42 [initandlisten]
m31101| Wed Jun 13 22:31:42 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Wed Jun 13 22:31:42 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Wed Jun 13 22:31:42 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31101| Wed Jun 13 22:31:42 [initandlisten] options: { dbpath: "/data/db/d1-1", keyFile: "jstests/libs/key2", noprealloc: true, oplogSize: 40, port: 31101, replSet: "d1", rest: true, smallfiles: true }
m31101| Wed Jun 13 22:31:42 [initandlisten] opening db: admin
m31101| Wed Jun 13 22:31:42 [initandlisten] waiting for connections on port 31101
m31101| Wed Jun 13 22:31:42 [websvr] admin web console waiting for connections on port 32101
m31101| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:56500 #1 (1 connection now open)
m31101| Wed Jun 13 22:31:42 [conn1] authenticate db: local { authenticate: 1, nonce: "77be790f906f32e6", user: "__system", key: "e5128214b09a49b6cac42276da38c1db" }
m31101| Wed Jun 13 22:31:42 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31101| Wed Jun 13 22:31:42 [conn1] opening db: local
m31101| Wed Jun 13 22:31:42 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Wed Jun 13 22:31:42 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31101| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 127.0.0.1:55190 #2 (2 connections now open)
m31101| Wed Jun 13 22:31:42 [conn2] note: no users configured in admin.system.users, allowing localhost access
[ connection to tp2.10gen.cc:31100, connection to tp2.10gen.cc:31101 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key2",
"port" : 31102,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d1",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 2,
"set" : "d1"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-2'
Wed Jun 13 22:31:42 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key2 --port 31102 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-2
m31102| note: noprealloc may hurt performance in many applications
m31102| Wed Jun 13 22:31:42
m31102| Wed Jun 13 22:31:42 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31102| Wed Jun 13 22:31:42
m31102| Wed Jun 13 22:31:42 [initandlisten] MongoDB starting : pid=10286 port=31102 dbpath=/data/db/d1-2 32-bit host=tp2.10gen.cc
m31102| Wed Jun 13 22:31:42 [initandlisten] _DEBUG build (which is slower)
m31102| Wed Jun 13 22:31:42 [initandlisten]
m31102| Wed Jun 13 22:31:42 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31102| Wed Jun 13 22:31:42 [initandlisten] ** Not recommended for production.
m31102| Wed Jun 13 22:31:42 [initandlisten]
m31102| Wed Jun 13 22:31:42 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31102| Wed Jun 13 22:31:42 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31102| Wed Jun 13 22:31:42 [initandlisten] ** with --journal, the limit is lower
m31102| Wed Jun 13 22:31:42 [initandlisten]
m31102| Wed Jun 13 22:31:42 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31102| Wed Jun 13 22:31:42 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31102| Wed Jun 13 22:31:42 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31102| Wed Jun 13 22:31:42 [initandlisten] options: { dbpath: "/data/db/d1-2", keyFile: "jstests/libs/key2", noprealloc: true, oplogSize: 40, port: 31102, replSet: "d1", rest: true, smallfiles: true }
m31102| Wed Jun 13 22:31:42 [initandlisten] opening db: admin
m31102| Wed Jun 13 22:31:42 [initandlisten] waiting for connections on port 31102
m31102| Wed Jun 13 22:31:42 [websvr] admin web console waiting for connections on port 32102
m31102| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:52661 #1 (1 connection now open)
m31102| Wed Jun 13 22:31:42 [conn1] authenticate db: local { authenticate: 1, nonce: "da9516a8ed8339f5", user: "__system", key: "beb87d034efb540649e6b350b94f68c8" }
m31102| Wed Jun 13 22:31:42 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31102| Wed Jun 13 22:31:42 [conn1] opening db: local
m31102| Wed Jun 13 22:31:42 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Wed Jun 13 22:31:42 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 127.0.0.1:34118 #2 (2 connections now open)
m31102| Wed Jun 13 22:31:42 [conn2] note: no users configured in admin.system.users, allowing localhost access
[
connection to tp2.10gen.cc:31100,
connection to tp2.10gen.cc:31101,
connection to tp2.10gen.cc:31102
]
{
"replSetInitiate" : {
"_id" : "d1",
"members" : [
{
"_id" : 0,
"host" : "tp2.10gen.cc:31100"
},
{
"_id" : 1,
"host" : "tp2.10gen.cc:31101"
},
{
"_id" : 2,
"host" : "tp2.10gen.cc:31102"
}
]
}
}
m31100| Wed Jun 13 22:31:42 [conn2] replSet replSetInitiate admin command received from client
m31100| Wed Jun 13 22:31:42 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31101| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:56505 #3 (3 connections now open)
m31101| Wed Jun 13 22:31:42 [conn3] authenticate db: local { authenticate: 1, nonce: "acfeddc40647d724", user: "__system", key: "cf6a4e51856f598b4df042dc98dc46ae" }
m31102| Wed Jun 13 22:31:42 [initandlisten] connection accepted from 184.173.149.242:52664 #3 (3 connections now open)
m31102| Wed Jun 13 22:31:43 [conn3] authenticate db: local { authenticate: 1, nonce: "33e1e2208448e0c2", user: "__system", key: "fdeeabd3ab0f65f81640dba1bec9a6a9" }
m31100| Wed Jun 13 22:31:43 [conn2] replSet replSetInitiate all members seem up
m31100| Wed Jun 13 22:31:43 [conn2] ******
m31100| Wed Jun 13 22:31:43 [conn2] creating replication oplog of size: 40MB...
m31100| Wed Jun 13 22:31:43 [FileAllocator] allocating new datafile /data/db/d1-0/local.ns, filling with zeroes...
m31100| Wed Jun 13 22:31:43 [FileAllocator] creating directory /data/db/d1-0/_tmp
m31100| Wed Jun 13 22:31:43 [FileAllocator] done allocating datafile /data/db/d1-0/local.ns, size: 16MB, took 0.034 secs
m31100| Wed Jun 13 22:31:43 [FileAllocator] allocating new datafile /data/db/d1-0/local.0, filling with zeroes...
m31100| Wed Jun 13 22:31:43 [FileAllocator] done allocating datafile /data/db/d1-0/local.0, size: 64MB, took 0.124 secs
m31100| Wed Jun 13 22:31:43 [conn2] datafileheader::init initializing /data/db/d1-0/local.0 n:0
m31100| Wed Jun 13 22:31:43 [conn2] ******
m31100| Wed Jun 13 22:31:43 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Wed Jun 13 22:31:43 [conn2] replSet saveConfigLocally done
m31100| Wed Jun 13 22:31:43 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Wed Jun 13 22:31:43 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "d1", members: [ { _id: 0.0, host: "tp2.10gen.cc:31100" }, { _id: 1.0, host: "tp2.10gen.cc:31101" }, { _id: 2.0, host: "tp2.10gen.cc:31102" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:179253 r:146 w:71 reslen:112 180ms
{
	"info" : "Config now saved locally. Should come online in about a minute.",
	"ok" : 1
}
initiated
m30999| Wed Jun 13 22:31:51 [Balancer] MaxChunkSize changing from 64MB to 1MB
m30999| Wed Jun 13 22:31:51 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b2793454f4c315250f7
m30999| Wed Jun 13 22:31:51 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m31000| Wed Jun 13 22:31:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b28e24b46bcab13cf42
m31000| Wed Jun 13 22:31:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
m31100| Wed Jun 13 22:31:52 [rsStart] replSet load config ok from self
m31100| Wed Jun 13 22:31:52 [rsStart] replSet I am tp2.10gen.cc:31100
m31100| Wed Jun 13 22:31:52 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31102
m31100| Wed Jun 13 22:31:52 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31101
m31100| Wed Jun 13 22:31:52 [rsStart] replSet STARTUP2
m31100| Wed Jun 13 22:31:52 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is up
m31100| Wed Jun 13 22:31:52 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is up
m31100| Wed Jun 13 22:31:52 [rsSync] replSet SECONDARY
m31101| Wed Jun 13 22:31:52 [rsStart] trying to contact tp2.10gen.cc:31100
m31100| Wed Jun 13 22:31:52 [initandlisten] connection accepted from 184.173.149.242:42798 #3 (3 connections now open)
m31100| Wed Jun 13 22:31:52 [conn3] authenticate db: local { authenticate: 1, nonce: "dc159fdc129e487", user: "__system", key: "b51ba7690b93e8794207cbfcf7d214d5" }
m31101| Wed Jun 13 22:31:52 [rsStart] replSet load config ok from tp2.10gen.cc:31100
m31101| Wed Jun 13 22:31:52 [rsStart] replSet I am tp2.10gen.cc:31101
m31101| Wed Jun 13 22:31:52 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31102
m31101| Wed Jun 13 22:31:52 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31100
m31101| Wed Jun 13 22:31:52 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Wed Jun 13 22:31:52 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Wed Jun 13 22:31:52 [FileAllocator] allocating new datafile /data/db/d1-1/local.ns, filling with zeroes...
m31101| Wed Jun 13 22:31:52 [FileAllocator] creating directory /data/db/d1-1/_tmp
m31101| Wed Jun 13 22:31:52 [FileAllocator] done allocating datafile /data/db/d1-1/local.ns, size: 16MB, took 0.037 secs
m31101| Wed Jun 13 22:31:52 [FileAllocator] allocating new datafile /data/db/d1-1/local.0, filling with zeroes...
m31101| Wed Jun 13 22:31:52 [FileAllocator] done allocating datafile /data/db/d1-1/local.0, size: 16MB, took 0.034 secs
m31101| Wed Jun 13 22:31:52 [rsStart] datafileheader::init initializing /data/db/d1-1/local.0 n:0
m31101| Wed Jun 13 22:31:52 [rsStart] replSet saveConfigLocally done
m31101| Wed Jun 13 22:31:52 [rsStart] replSet STARTUP2
m31101| Wed Jun 13 22:31:52 [rsSync] ******
m31101| Wed Jun 13 22:31:52 [rsSync] creating replication oplog of size: 40MB...
m31101| Wed Jun 13 22:31:52 [FileAllocator] allocating new datafile /data/db/d1-1/local.1, filling with zeroes...
m31101| Wed Jun 13 22:31:52 [FileAllocator] done allocating datafile /data/db/d1-1/local.1, size: 64MB, took 0.119 secs
m31101| Wed Jun 13 22:31:52 [rsSync] datafileheader::init initializing /data/db/d1-1/local.1 n:1
m31102| Wed Jun 13 22:31:52 [rsStart] trying to contact tp2.10gen.cc:31100
m31100| Wed Jun 13 22:31:52 [initandlisten] connection accepted from 184.173.149.242:42799 #4 (4 connections now open)
m31100| Wed Jun 13 22:31:52 [conn4] authenticate db: local { authenticate: 1, nonce: "7351fbfb2ee49cb1", user: "__system", key: "f55286272e7e27454b9b34f4e8a93158" }
m31102| Wed Jun 13 22:31:52 [rsStart] replSet load config ok from tp2.10gen.cc:31100
m31102| Wed Jun 13 22:31:52 [rsStart] replSet I am tp2.10gen.cc:31102
m31102| Wed Jun 13 22:31:52 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31101
m31102| Wed Jun 13 22:31:52 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31100
m31102| Wed Jun 13 22:31:52 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Wed Jun 13 22:31:52 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Wed Jun 13 22:31:52 [FileAllocator] allocating new datafile /data/db/d1-2/local.ns, filling with zeroes...
m31102| Wed Jun 13 22:31:52 [FileAllocator] creating directory /data/db/d1-2/_tmp
m31101| Wed Jun 13 22:31:52 [rsSync] ******
m31101| Wed Jun 13 22:31:52 [rsSync] replSet initial sync pending
m31101| Wed Jun 13 22:31:52 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31102| Wed Jun 13 22:31:52 [FileAllocator] done allocating datafile /data/db/d1-2/local.ns, size: 16MB, took 0.036 secs
m31102| Wed Jun 13 22:31:52 [FileAllocator] allocating new datafile /data/db/d1-2/local.0, filling with zeroes...
m31102| Wed Jun 13 22:31:52 [FileAllocator] done allocating datafile /data/db/d1-2/local.0, size: 16MB, took 0.034 secs
m31102| Wed Jun 13 22:31:52 [rsStart] datafileheader::init initializing /data/db/d1-2/local.0 n:0
m31102| Wed Jun 13 22:31:52 [rsStart] replSet saveConfigLocally done
m31102| Wed Jun 13 22:31:52 [rsStart] replSet STARTUP2
m31102| Wed Jun 13 22:31:52 [rsSync] ******
m31102| Wed Jun 13 22:31:52 [rsSync] creating replication oplog of size: 40MB...
m31102| Wed Jun 13 22:31:52 [FileAllocator] allocating new datafile /data/db/d1-2/local.1, filling with zeroes...
m31102| Wed Jun 13 22:31:53 [FileAllocator] done allocating datafile /data/db/d1-2/local.1, size: 64MB, took 0.193 secs
m31102| Wed Jun 13 22:31:53 [rsSync] datafileheader::init initializing /data/db/d1-2/local.1 n:1
m31102| Wed Jun 13 22:31:53 [rsSync] ******
m31102| Wed Jun 13 22:31:53 [rsSync] replSet initial sync pending
m31102| Wed Jun 13 22:31:53 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state STARTUP2
m31100| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state STARTUP2
m31100| Wed Jun 13 22:31:54 [rsMgr] not electing self, tp2.10gen.cc:31102 would veto
m31100| Wed Jun 13 22:31:54 [rsMgr] not electing self, tp2.10gen.cc:31102 would veto
m31101| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is up
m31101| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state SECONDARY
m31102| Wed Jun 13 22:31:54 [initandlisten] connection accepted from 184.173.149.242:52667 #4 (4 connections now open)
m31102| Wed Jun 13 22:31:54 [conn4] authenticate db: local { authenticate: 1, nonce: "bc73a85fb213c8ec", user: "__system", key: "67a8addca82eed8820c5a790ee2f27f2" }
m31101| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is up
m31101| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state STARTUP2
m31102| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is up
m31101| Wed Jun 13 22:31:54 [initandlisten] connection accepted from 184.173.149.242:56510 #4 (4 connections now open)
m31102| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state SECONDARY
m31101| Wed Jun 13 22:31:54 [conn4] authenticate db: local { authenticate: 1, nonce: "dd5db9e1ced19204", user: "__system", key: "93c788010d30b2173549d5a00820fc90" }
m31102| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is up
m31102| Wed Jun 13 22:31:54 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state STARTUP2
m31100| Wed Jun 13 22:32:00 [rsMgr] replSet info electSelf 0
m31102| Wed Jun 13 22:32:00 [conn3] replSet received elect msg { replSetElect: 1, set: "d1", who: "tp2.10gen.cc:31100", whoid: 0, cfgver: 1, round: ObjectId('4fd95b30f44ec9900bb67e5e') }
m31102| Wed Jun 13 22:32:00 [conn3] replSet RECOVERING
m31101| Wed Jun 13 22:32:00 [conn3] replSet received elect msg { replSetElect: 1, set: "d1", who: "tp2.10gen.cc:31100", whoid: 0, cfgver: 1, round: ObjectId('4fd95b30f44ec9900bb67e5e') }
m31101| Wed Jun 13 22:32:00 [conn3] replSet RECOVERING
m31101| Wed Jun 13 22:32:00 [conn3] replSet info voting yea for tp2.10gen.cc:31100 (0)
m31102| Wed Jun 13 22:32:00 [conn3] replSet info voting yea for tp2.10gen.cc:31100 (0)
m31100| Wed Jun 13 22:32:00 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95b30f44ec9900bb67e5e'), ok: 1.0 }
m31100| Wed Jun 13 22:32:00 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95b30f44ec9900bb67e5e'), ok: 1.0 }
m31100| Wed Jun 13 22:32:00 [rsMgr] info PageFaultRetryableSection will not yield, already locked upon reaching
m31100| Wed Jun 13 22:32:00 [rsMgr] replSet PRIMARY
m31101| Wed Jun 13 22:32:00 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state PRIMARY
m31101| Wed Jun 13 22:32:00 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state RECOVERING
m31102| Wed Jun 13 22:32:00 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state PRIMARY
m31102| Wed Jun 13 22:32:00 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state RECOVERING
adding shard w/out auth d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m29000| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:55455 #12 (12 connections now open)
m29000| Wed Jun 13 22:32:01 [conn12] authenticate db: local { authenticate: 1, nonce: "775f36199bf11227", user: "__system", key: "03ab17beec6104b0ae8d990e5faae0f4" }
m31000| Wed Jun 13 22:32:01 [conn] creating WriteBackListener for: tp2.10gen.cc:29000 serverID: 4fd95b1ee24b46bcab13cf40
{
	"note" : "need to authorized on db: admin for command: addShard",
	"ok" : 0,
	"errmsg" : "unauthorized"
}
m31000| Wed Jun 13 22:32:01 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "ed4edf48f6565bcf", key: "ef54916c67c9895aeb88934487b3ef8c" }
{ "dbname" : "admin", "user" : "foo", "readOnly" : false, "ok" : 1 }
adding shard w/wrong key d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m30999| Wed Jun 13 22:32:01 [conn] starting new replica set monitor for replica set d1 with seed of tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m30999| Wed Jun 13 22:32:01 [conn] successfully connected to seed tp2.10gen.cc:31100 for replica set d1
m31100| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:42803 #5 (5 connections now open)
m30999| Wed Jun 13 22:32:01 [conn] changing hosts to { 0: "tp2.10gen.cc:31100", 1: "tp2.10gen.cc:31102", 2: "tp2.10gen.cc:31101" } from d1/
m30999| Wed Jun 13 22:32:01 [conn] trying to add new host tp2.10gen.cc:31100 to replica set d1
m30999| Wed Jun 13 22:32:01 [conn] successfully connected to new host tp2.10gen.cc:31100 in replica set d1
m31100| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:42804 #6 (6 connections now open)
m30999| Wed Jun 13 22:32:01 [conn] trying to add new host tp2.10gen.cc:31101 to replica set d1
m30999| Wed Jun 13 22:32:01 [conn] successfully connected to new host tp2.10gen.cc:31101 in replica set d1
m31101| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:56514 #5 (5 connections now open)
m30999| Wed Jun 13 22:32:01 [conn] trying to add new host tp2.10gen.cc:31102 to replica set d1
m31102| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:52673 #5 (5 connections now open)
m30999| Wed Jun 13 22:32:01 [conn] successfully connected to new host tp2.10gen.cc:31102 in replica set d1
m31100| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:42807 #7 (7 connections now open)
m31100| Wed Jun 13 22:32:01 [conn7] authenticate db: local { authenticate: 1, nonce: "e5a855a6a64b45b3", user: "__system", key: "ed8364ac805df7466125023ec1e8daaf" }
m31100| Wed Jun 13 22:32:01 [conn7] auth: key mismatch __system, ns:local
m31100| Wed Jun 13 22:32:01 [conn7] end connection 184.173.149.242:42807 (6 connections now open)
m31100| Wed Jun 13 22:32:01 [conn5] end connection 184.173.149.242:42803 (5 connections now open)
m31100| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:42808 #8 (6 connections now open)
m31100| Wed Jun 13 22:32:01 [conn8] authenticate db: local { authenticate: 1, nonce: "2ba971568508f836", user: "__system", key: "8a01b08afb438b1e19ebc901e4291d73" }
m31100| Wed Jun 13 22:32:01 [conn8] auth: key mismatch __system, ns:local
m31100| Wed Jun 13 22:32:01 [conn8] end connection 184.173.149.242:42808 (5 connections now open)
m30999| Wed Jun 13 22:32:01 [conn] Primary for replica set d1 changed to tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:42809 #9 (6 connections now open)
m31100| Wed Jun 13 22:32:01 [conn9] authenticate db: local { authenticate: 1, nonce: "6dfbdfcca3e793d", user: "__system", key: "52d0569017214eaa7b8c5fe5f2140d54" }
m31100| Wed Jun 13 22:32:01 [conn9] auth: key mismatch __system, ns:local
m31100| Wed Jun 13 22:32:01 [conn9] end connection 184.173.149.242:42809 (5 connections now open)
m31101| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:56519 #6 (6 connections now open)
m31101| Wed Jun 13 22:32:01 [conn6] authenticate db: local { authenticate: 1, nonce: "e13642f0e26959af", user: "__system", key: "18feb926ed0582d4ff6ea8335365c346" }
m31101| Wed Jun 13 22:32:01 [conn6] auth: key mismatch __system, ns:local
m31101| Wed Jun 13 22:32:01 [conn6] end connection 184.173.149.242:56519 (5 connections now open)
m31102| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:52678 #6 (6 connections now open)
m31102| Wed Jun 13 22:32:01 [conn6] authenticate db: local { authenticate: 1, nonce: "9a48c9d2938d998", user: "__system", key: "5e171e71fcd1ae3038307ed94b87d528" }
m31102| Wed Jun 13 22:32:01 [conn6] auth: key mismatch __system, ns:local
m31102| Wed Jun 13 22:32:01 [conn6] end connection 184.173.149.242:52678 (5 connections now open)
m30999| Wed Jun 13 22:32:01 [conn] replica set monitor for replica set d1 started, address is d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m30999| Wed Jun 13 22:32:01 [ReplicaSetMonitorWatcher] starting
m31100| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:42812 #10 (6 connections now open)
m31100| Wed Jun 13 22:32:01 [conn10] authenticate db: local { authenticate: 1, nonce: "6636b5e7881271e1", user: "__system", key: "d3b8ecd9f364520c24ee3fa01c2d0b7d" }
m31100| Wed Jun 13 22:32:01 [conn10] auth: key mismatch __system, ns:local
m31100| Wed Jun 13 22:32:01 [conn10] end connection 184.173.149.242:42812 (5 connections now open)
m31100| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:42813 #11 (6 connections now open)
m31100| Wed Jun 13 22:32:01 [conn11] authenticate db: local { authenticate: 1, nonce: "9c4f4d01b26e3e94", user: "__system", key: "bbad10adf09a6089c5986d6c301c7afc" }
m31100| Wed Jun 13 22:32:01 [conn11] auth: key mismatch __system, ns:local
m31100| Wed Jun 13 22:32:01 [conn11] end connection 184.173.149.242:42813 (5 connections now open)
m31100| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:42814 #12 (6 connections now open)
m31100| Wed Jun 13 22:32:01 [initandlisten] connection accepted from 184.173.149.242:42815 #13 (7 connections now open)
m31100| Wed Jun 13 22:32:01 [conn13] authenticate db: local { authenticate: 1, nonce: "74e774b9085f1cee", user: "__system", key: "2afd033fc7108646624e4787cec2f7fa" }
m31100| Wed Jun 13 22:32:01 [conn13] auth: key mismatch __system, ns:local
m31100| Wed Jun 13 22:32:01 [conn13] end connection 184.173.149.242:42815 (6 connections now open)
m31100| Wed Jun 13 22:32:01 [conn12] authenticate db: local { authenticate: 1, nonce: "552bba66de634745", user: "__system", key: "735c1f189ce99a61ae5f76495983b391" }
m31100| Wed Jun 13 22:32:01 [conn12] auth: key mismatch __system, ns:local
m31100| Wed Jun 13 22:32:01 [conn12] end connection 184.173.149.242:42814 (5 connections now open)
m30999| Wed Jun 13 22:32:01 [conn] deleting replica set monitor for: d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31100| Wed Jun 13 22:32:01 [conn6] end connection 184.173.149.242:42804 (4 connections now open)
m31101| Wed Jun 13 22:32:01 [conn5] end connection 184.173.149.242:56514 (4 connections now open)
m31102| Wed Jun 13 22:32:01 [conn5] end connection 184.173.149.242:52673 (4 connections now open)
m30999| Wed Jun 13 22:32:01 [conn] addshard request { addShard: "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102" } failed: couldn't connect to new shard can't authenticate to shard server
"command { \"addShard\" : \"d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102\" } failed: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"couldn't connect to new shard can't authenticate to shard server\"\n}"
start rs w/correct key
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| Wed Jun 13 22:32:01 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Wed Jun 13 22:32:01 [interruptThread] now exiting
m31100| Wed Jun 13 22:32:01 dbexit:
m31100| Wed Jun 13 22:32:01 [interruptThread] shutdown: going to close listening sockets...
m31100| Wed Jun 13 22:32:01 [interruptThread] closing listening socket: 27
m31100| Wed Jun 13 22:32:01 [interruptThread] closing listening socket: 30
m31100| Wed Jun 13 22:32:01 [interruptThread] closing listening socket: 31
m31100| Wed Jun 13 22:32:01 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Wed Jun 13 22:32:01 [interruptThread] shutdown: going to flush diaglog...
m31100| Wed Jun 13 22:32:01 [interruptThread] shutdown: going to close sockets...
m31100| Wed Jun 13 22:32:01 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Wed Jun 13 22:32:01 [interruptThread] shutdown: closing all files...
m31101| Wed Jun 13 22:32:01 [conn3] end connection 184.173.149.242:56505 (3 connections now open)
m31102| Wed Jun 13 22:32:01 [conn3] end connection 184.173.149.242:52664 (3 connections now open)
m31100| Wed Jun 13 22:32:01 [conn1] end connection 184.173.149.242:42788 (3 connections now open)
m31100| Wed Jun 13 22:32:01 [interruptThread] closeAllFiles() finished
m31100| Wed Jun 13 22:32:01 [interruptThread] shutdown: removing fs lock...
m31100| Wed Jun 13 22:32:01 dbexit: really exiting now
m30999| Wed Jun 13 22:32:01 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b3193454f4c315250f8
m30999| Wed Jun 13 22:32:01 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m31000| Wed Jun 13 22:32:02 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b32e24b46bcab13cf43
m31000| Wed Jun 13 22:32:02 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
Wed Jun 13 22:32:02 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| Wed Jun 13 22:32:02 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Wed Jun 13 22:32:02 [interruptThread] now exiting
m31101| Wed Jun 13 22:32:02 dbexit:
m31101| Wed Jun 13 22:32:02 [interruptThread] shutdown: going to close listening sockets...
m31101| Wed Jun 13 22:32:02 [interruptThread] closing listening socket: 31
m31101| Wed Jun 13 22:32:02 [interruptThread] closing listening socket: 32
m31101| Wed Jun 13 22:32:02 [interruptThread] closing listening socket: 34
m31101| Wed Jun 13 22:32:02 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Wed Jun 13 22:32:02 [interruptThread] shutdown: going to flush diaglog...
m31101| Wed Jun 13 22:32:02 [interruptThread] shutdown: going to close sockets...
m31101| Wed Jun 13 22:32:02 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Wed Jun 13 22:32:02 [interruptThread] shutdown: closing all files...
m31102| Wed Jun 13 22:32:02 [conn4] end connection 184.173.149.242:52667 (2 connections now open)
m31101| Wed Jun 13 22:32:02 [conn1] end connection 184.173.149.242:56500 (2 connections now open)
m31101| Wed Jun 13 22:32:02 [interruptThread] closeAllFiles() finished
m31101| Wed Jun 13 22:32:02 [interruptThread] shutdown: removing fs lock...
m31101| Wed Jun 13 22:32:02 dbexit: really exiting now
m31102| Wed Jun 13 22:32:02 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Wed Jun 13 22:32:02 [rsHealthPoll] replSet info tp2.10gen.cc:31100 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31100 ns: admin.$cmd query: { replSetHeartbeat: "d1", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31102" }
m31102| Wed Jun 13 22:32:02 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state DOWN
m31102| Wed Jun 13 22:32:02 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Wed Jun 13 22:32:02 [rsHealthPoll] replSet info tp2.10gen.cc:31101 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31101 ns: admin.$cmd query: { replSetHeartbeat: "d1", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31102" }
m31102| Wed Jun 13 22:32:02 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state DOWN
Wed Jun 13 22:32:03 shell: stopped mongo program on port 31101
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
ReplSetTest stop *** Shutting down mongod in port 31102 ***
m31102| Wed Jun 13 22:32:03 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Wed Jun 13 22:32:03 [interruptThread] now exiting
m31102| Wed Jun 13 22:32:03 dbexit:
m31102| Wed Jun 13 22:32:03 [interruptThread] shutdown: going to close listening sockets...
m31102| Wed Jun 13 22:32:03 [interruptThread] closing listening socket: 34
m31102| Wed Jun 13 22:32:03 [interruptThread] closing listening socket: 39
m31102| Wed Jun 13 22:32:03 [interruptThread] closing listening socket: 40
m31102| Wed Jun 13 22:32:03 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Wed Jun 13 22:32:03 [interruptThread] shutdown: going to flush diaglog...
m31102| Wed Jun 13 22:32:03 [interruptThread] shutdown: going to close sockets...
m31102| Wed Jun 13 22:32:03 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Wed Jun 13 22:32:03 [interruptThread] shutdown: closing all files...
m31102| Wed Jun 13 22:32:03 [conn1] end connection 184.173.149.242:52661 (1 connection now open)
m31102| Wed Jun 13 22:32:03 [interruptThread] closeAllFiles() finished
m31102| Wed Jun 13 22:32:03 [interruptThread] shutdown: removing fs lock...
m31102| Wed Jun 13 22:32:03 dbexit: really exiting now
Wed Jun 13 22:32:04 shell: stopped mongo program on port 31102
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : "jstests/libs/key1",
	"port" : 31100,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "d1",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 0,
		"set" : "d1"
	}
}
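The options document above is what ReplSetTest passes to each mongod it launches; compare the command line printed a few lines below. A minimal sketch of the usual jstest pattern that produces output like this, assuming the standard ReplSetTest helper (the exact test file is not shown in the log):

// Sketch of the typical ReplSetTest usage behind the output above (assumed, not from the log):
var replTest = new ReplSetTest({ name: "d1", nodes: 3, oplogSize: 40, keyFile: "jstests/libs/key1" });
replTest.startSet();   // starts mongod on ports 31100-31102 with options like the document above
replTest.initiate();   // sends the replSetInitiate document that appears further down in the log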
ReplSetTest Starting....
Resetting db path '/data/db/d1-0'
Wed Jun 13 22:32:04 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31100 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-0
m31100| note: noprealloc may hurt performance in many applications
m31100| Wed Jun 13 22:32:04
m31100| Wed Jun 13 22:32:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31100| Wed Jun 13 22:32:04
m31100| Wed Jun 13 22:32:04 [initandlisten] MongoDB starting : pid=10376 port=31100 dbpath=/data/db/d1-0 32-bit host=tp2.10gen.cc
m31100| Wed Jun 13 22:32:04 [initandlisten] _DEBUG build (which is slower)
m31100| Wed Jun 13 22:32:04 [initandlisten]
m31100| Wed Jun 13 22:32:04 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31100| Wed Jun 13 22:32:04 [initandlisten] ** Not recommended for production.
m31100| Wed Jun 13 22:32:04 [initandlisten]
m31100| Wed Jun 13 22:32:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31100| Wed Jun 13 22:32:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31100| Wed Jun 13 22:32:04 [initandlisten] ** with --journal, the limit is lower
m31100| Wed Jun 13 22:32:04 [initandlisten]
m31100| Wed Jun 13 22:32:04 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31100| Wed Jun 13 22:32:04 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31100| Wed Jun 13 22:32:04 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31100| Wed Jun 13 22:32:04 [initandlisten] options: { dbpath: "/data/db/d1-0", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31100, replSet: "d1", rest: true, smallfiles: true }
m31100| Wed Jun 13 22:32:04 [initandlisten] opening db: admin
m31100| Wed Jun 13 22:32:04 [initandlisten] waiting for connections on port 31100
m31100| Wed Jun 13 22:32:04 [websvr] admin web console waiting for connections on port 32100
m31100| Wed Jun 13 22:32:04 [initandlisten] connection accepted from 184.173.149.242:42817 #1 (1 connection now open)
m31100| Wed Jun 13 22:32:04 [conn1] authenticate db: local { authenticate: 1, nonce: "da70c62da07ba719", user: "__system", key: "0fd8ff517a6ae7feed657a130f8280a3" }
m31100| Wed Jun 13 22:32:04 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31100| Wed Jun 13 22:32:04 [conn1] opening db: local
m31100| Wed Jun 13 22:32:04 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Wed Jun 13 22:32:04 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[
	connection to tp2.10gen.cc:31100,
	connection to tp2.10gen.cc:31101,
	connection to tp2.10gen.cc:31102
]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : "jstests/libs/key1",
	"port" : 31101,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "d1",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 1,
		"set" : "d1"
	}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-1'
Wed Jun 13 22:32:04 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31101 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-1
m31100| Wed Jun 13 22:32:04 [initandlisten] connection accepted from 127.0.0.1:50908 #2 (2 connections now open)
m31100| Wed Jun 13 22:32:04 [conn2] note: no users configured in admin.system.users, allowing localhost access
m31101| note: noprealloc may hurt performance in many applications
m31101| Wed Jun 13 22:32:04
m31101| Wed Jun 13 22:32:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31101| Wed Jun 13 22:32:04
m31101| Wed Jun 13 22:32:04 [initandlisten] MongoDB starting : pid=10391 port=31101 dbpath=/data/db/d1-1 32-bit host=tp2.10gen.cc
m31101| Wed Jun 13 22:32:04 [initandlisten] _DEBUG build (which is slower)
m31101| Wed Jun 13 22:32:04 [initandlisten]
m31101| Wed Jun 13 22:32:04 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31101| Wed Jun 13 22:32:04 [initandlisten] ** Not recommended for production.
m31101| Wed Jun 13 22:32:04 [initandlisten]
m31101| Wed Jun 13 22:32:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31101| Wed Jun 13 22:32:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31101| Wed Jun 13 22:32:04 [initandlisten] ** with --journal, the limit is lower
m31101| Wed Jun 13 22:32:04 [initandlisten]
m31101| Wed Jun 13 22:32:04 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31101| Wed Jun 13 22:32:04 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31101| Wed Jun 13 22:32:04 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31101| Wed Jun 13 22:32:04 [initandlisten] options: { dbpath: "/data/db/d1-1", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "d1", rest: true, smallfiles: true }
m31101| Wed Jun 13 22:32:04 [initandlisten] opening db: admin
m31101| Wed Jun 13 22:32:04 [initandlisten] waiting for connections on port 31101
m31101| Wed Jun 13 22:32:04 [websvr] admin web console waiting for connections on port 32101
m31101| Wed Jun 13 22:32:04 [initandlisten] connection accepted from 184.173.149.242:56529 #1 (1 connection now open)
m31101| Wed Jun 13 22:32:04 [conn1] authenticate db: local { authenticate: 1, nonce: "b3f7406ba493ea1c", user: "__system", key: "fe6ca2887b76963d852ca2f342604df3" }
m31101| Wed Jun 13 22:32:04 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31101| Wed Jun 13 22:32:04 [conn1] opening db: local
m31101| Wed Jun 13 22:32:04 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Wed Jun 13 22:32:04 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31101| Wed Jun 13 22:32:04 [initandlisten] connection accepted from 127.0.0.1:55219 #2 (2 connections now open)
m31101| Wed Jun 13 22:32:04 [conn2] note: no users configured in admin.system.users, allowing localhost access
[
	connection to tp2.10gen.cc:31100,
	connection to tp2.10gen.cc:31101,
	connection to tp2.10gen.cc:31102
]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : "jstests/libs/key1",
	"port" : 31102,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "d1",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 2,
		"set" : "d1"
	}
}
ReplSetTest Starting....
Resetting db path '/data/db/d1-2'
Wed Jun 13 22:32:04 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31102 --noprealloc --smallfiles --rest --replSet d1 --dbpath /data/db/d1-2
m31102| note: noprealloc may hurt performance in many applications
m31102| Wed Jun 13 22:32:04
m31102| Wed Jun 13 22:32:04 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31102| Wed Jun 13 22:32:04
m31102| Wed Jun 13 22:32:04 [initandlisten] MongoDB starting : pid=10408 port=31102 dbpath=/data/db/d1-2 32-bit host=tp2.10gen.cc
m31102| Wed Jun 13 22:32:04 [initandlisten] _DEBUG build (which is slower)
m31102| Wed Jun 13 22:32:04 [initandlisten]
m31102| Wed Jun 13 22:32:04 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31102| Wed Jun 13 22:32:04 [initandlisten] ** Not recommended for production.
m31102| Wed Jun 13 22:32:04 [initandlisten]
m31102| Wed Jun 13 22:32:04 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31102| Wed Jun 13 22:32:04 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31102| Wed Jun 13 22:32:04 [initandlisten] ** with --journal, the limit is lower
m31102| Wed Jun 13 22:32:04 [initandlisten]
m31102| Wed Jun 13 22:32:04 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31102| Wed Jun 13 22:32:04 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31102| Wed Jun 13 22:32:04 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31102| Wed Jun 13 22:32:04 [initandlisten] options: { dbpath: "/data/db/d1-2", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31102, replSet: "d1", rest: true, smallfiles: true }
m31102| Wed Jun 13 22:32:04 [initandlisten] opening db: admin
m31102| Wed Jun 13 22:32:04 [initandlisten] waiting for connections on port 31102
m31102| Wed Jun 13 22:32:04 [websvr] admin web console waiting for connections on port 32102
m31102| Wed Jun 13 22:32:04 [initandlisten] connection accepted from 184.173.149.242:52690 #1 (1 connection now open)
m31102| Wed Jun 13 22:32:04 [conn1] authenticate db: local { authenticate: 1, nonce: "9c10a2965017cdc8", user: "__system", key: "f519801a4241cbc1be2165b92fb5c6fa" }
m31102| Wed Jun 13 22:32:04 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31102| Wed Jun 13 22:32:04 [conn1] opening db: local
m31102| Wed Jun 13 22:32:04 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Wed Jun 13 22:32:04 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Wed Jun 13 22:32:04 [initandlisten] connection accepted from 127.0.0.1:34147 #2 (2 connections now open)
m31102| Wed Jun 13 22:32:04 [conn2] note: no users configured in admin.system.users, allowing localhost access
[
	connection to tp2.10gen.cc:31100,
	connection to tp2.10gen.cc:31101,
	connection to tp2.10gen.cc:31102
]
{
	"replSetInitiate" : {
		"_id" : "d1",
		"members" : [
			{
				"_id" : 0,
				"host" : "tp2.10gen.cc:31100"
			},
			{
				"_id" : 1,
				"host" : "tp2.10gen.cc:31101"
			},
			{
				"_id" : 2,
				"host" : "tp2.10gen.cc:31102"
			}
		]
	}
}
m31100| Wed Jun 13 22:32:04 [conn2] replSet replSetInitiate admin command received from client
m31100| Wed Jun 13 22:32:04 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31101| Wed Jun 13 22:32:04 [initandlisten] connection accepted from 184.173.149.242:56534 #3 (3 connections now open)
m31101| Wed Jun 13 22:32:04 [conn3] authenticate db: local { authenticate: 1, nonce: "e2ff258e3d9fdc5b", user: "__system", key: "98af65bc8225dafc40b7845b79bb388c" }
m31102| Wed Jun 13 22:32:04 [initandlisten] connection accepted from 184.173.149.242:52693 #3 (3 connections now open)
m31102| Wed Jun 13 22:32:04 [conn3] authenticate db: local { authenticate: 1, nonce: "3fd5c45a2fdfa2f1", user: "__system", key: "6dd232bb47221a7920b530aef6343586" }
m31100| Wed Jun 13 22:32:04 [conn2] replSet replSetInitiate all members seem up
m31100| Wed Jun 13 22:32:04 [conn2] ******
m31100| Wed Jun 13 22:32:04 [conn2] creating replication oplog of size: 40MB...
m31100| Wed Jun 13 22:32:04 [FileAllocator] allocating new datafile /data/db/d1-0/local.ns, filling with zeroes...
m31100| Wed Jun 13 22:32:04 [FileAllocator] creating directory /data/db/d1-0/_tmp
m31100| Wed Jun 13 22:32:04 [FileAllocator] done allocating datafile /data/db/d1-0/local.ns, size: 16MB, took 0.036 secs
m31100| Wed Jun 13 22:32:04 [FileAllocator] allocating new datafile /data/db/d1-0/local.0, filling with zeroes...
m31100| Wed Jun 13 22:32:05 [FileAllocator] done allocating datafile /data/db/d1-0/local.0, size: 64MB, took 0.112 secs
m31100| Wed Jun 13 22:32:05 [conn2] datafileheader::init initializing /data/db/d1-0/local.0 n:0
m31100| Wed Jun 13 22:32:05 [conn2] ******
m31100| Wed Jun 13 22:32:05 [conn2] replSet info saving a newer config version to local.system.replset
m31100| Wed Jun 13 22:32:05 [conn2] replSet saveConfigLocally done
m31100| Wed Jun 13 22:32:05 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Wed Jun 13 22:32:05 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "d1", members: [ { _id: 0.0, host: "tp2.10gen.cc:31100" }, { _id: 1.0, host: "tp2.10gen.cc:31101" }, { _id: 2.0, host: "tp2.10gen.cc:31102" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:185099 r:128 w:72 reslen:112 186ms
{
	"info" : "Config now saved locally. Should come online in about a minute.",
	"ok" : 1
}
m30999| Wed Jun 13 22:32:11 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b3b93454f4c315250f9
m30999| Wed Jun 13 22:32:11 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m31000| Wed Jun 13 22:32:12 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b3ce24b46bcab13cf44
m31000| Wed Jun 13 22:32:12 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
m31100| Wed Jun 13 22:32:14 [rsStart] replSet load config ok from self
m31100| Wed Jun 13 22:32:14 [rsStart] replSet I am tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:14 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31102
m31100| Wed Jun 13 22:32:14 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31101
m31100| Wed Jun 13 22:32:14 [rsStart] replSet STARTUP2
m31100| Wed Jun 13 22:32:14 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is up
m31100| Wed Jun 13 22:32:14 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is up
m31100| Wed Jun 13 22:32:14 [rsSync] replSet SECONDARY
m31101| Wed Jun 13 22:32:14 [rsStart] trying to contact tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:14 [initandlisten] connection accepted from 184.173.149.242:42827 #3 (3 connections now open)
m31100| Wed Jun 13 22:32:14 [conn3] authenticate db: local { authenticate: 1, nonce: "1e1fd54807b2c04d", user: "__system", key: "e042ee84bc84e662c7e27bd8e1061b45" }
m31101| Wed Jun 13 22:32:14 [rsStart] replSet load config ok from tp2.10gen.cc:31100
m31101| Wed Jun 13 22:32:14 [rsStart] replSet I am tp2.10gen.cc:31101
m31101| Wed Jun 13 22:32:14 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31102
m31101| Wed Jun 13 22:32:14 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31100
m31101| Wed Jun 13 22:32:14 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Wed Jun 13 22:32:14 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Wed Jun 13 22:32:14 [FileAllocator] allocating new datafile /data/db/d1-1/local.ns, filling with zeroes...
m31101| Wed Jun 13 22:32:14 [FileAllocator] creating directory /data/db/d1-1/_tmp
m31101| Wed Jun 13 22:32:14 [FileAllocator] done allocating datafile /data/db/d1-1/local.ns, size: 16MB, took 0.036 secs
m31101| Wed Jun 13 22:32:14 [FileAllocator] allocating new datafile /data/db/d1-1/local.0, filling with zeroes...
m31101| Wed Jun 13 22:32:14 [FileAllocator] done allocating datafile /data/db/d1-1/local.0, size: 16MB, took 0.034 secs
m31101| Wed Jun 13 22:32:14 [rsStart] datafileheader::init initializing /data/db/d1-1/local.0 n:0
m31101| Wed Jun 13 22:32:14 [rsStart] replSet saveConfigLocally done
m31101| Wed Jun 13 22:32:14 [rsStart] replSet STARTUP2
m31101| Wed Jun 13 22:32:14 [rsSync] ******
m31101| Wed Jun 13 22:32:14 [rsSync] creating replication oplog of size: 40MB...
m31101| Wed Jun 13 22:32:14 [FileAllocator] allocating new datafile /data/db/d1-1/local.1, filling with zeroes...
m31102| Wed Jun 13 22:32:14 [rsStart] trying to contact tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:14 [initandlisten] connection accepted from 184.173.149.242:42828 #4 (4 connections now open)
m31100| Wed Jun 13 22:32:14 [conn4] authenticate db: local { authenticate: 1, nonce: "35aea34e142876bf", user: "__system", key: "8c6b772c11ddea3dc1053684faee485c" }
m31102| Wed Jun 13 22:32:14 [rsStart] replSet load config ok from tp2.10gen.cc:31100
m31102| Wed Jun 13 22:32:14 [rsStart] replSet I am tp2.10gen.cc:31102
m31102| Wed Jun 13 22:32:14 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31101
m31102| Wed Jun 13 22:32:14 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31100
m31102| Wed Jun 13 22:32:14 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Wed Jun 13 22:32:14 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Wed Jun 13 22:32:14 [FileAllocator] allocating new datafile /data/db/d1-2/local.ns, filling with zeroes...
m31102| Wed Jun 13 22:32:14 [FileAllocator] creating directory /data/db/d1-2/_tmp
m31101| Wed Jun 13 22:32:14 [FileAllocator] done allocating datafile /data/db/d1-1/local.1, size: 64MB, took 0.118 secs
m31101| Wed Jun 13 22:32:14 [rsSync] datafileheader::init initializing /data/db/d1-1/local.1 n:1
m31101| Wed Jun 13 22:32:14 [rsSync] ******
m31101| Wed Jun 13 22:32:14 [rsSync] replSet initial sync pending
m31101| Wed Jun 13 22:32:14 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31102| Wed Jun 13 22:32:14 [FileAllocator] done allocating datafile /data/db/d1-2/local.ns, size: 16MB, took 0.036 secs
m31102| Wed Jun 13 22:32:14 [FileAllocator] allocating new datafile /data/db/d1-2/local.0, filling with zeroes...
m31102| Wed Jun 13 22:32:14 [FileAllocator] done allocating datafile /data/db/d1-2/local.0, size: 16MB, took 0.038 secs
m31102| Wed Jun 13 22:32:14 [rsStart] datafileheader::init initializing /data/db/d1-2/local.0 n:0
m31102| Wed Jun 13 22:32:14 [rsStart] replSet saveConfigLocally done
m31102| Wed Jun 13 22:32:14 [rsStart] replSet STARTUP2
m31102| Wed Jun 13 22:32:14 [rsSync] ******
m31102| Wed Jun 13 22:32:14 [rsSync] creating replication oplog of size: 40MB...
m31102| Wed Jun 13 22:32:14 [FileAllocator] allocating new datafile /data/db/d1-2/local.1, filling with zeroes...
m31102| Wed Jun 13 22:32:15 [FileAllocator] done allocating datafile /data/db/d1-2/local.1, size: 64MB, took 0.209 secs
m31102| Wed Jun 13 22:32:15 [rsSync] datafileheader::init initializing /data/db/d1-2/local.1 n:1
m31102| Wed Jun 13 22:32:15 [rsSync] ******
m31102| Wed Jun 13 22:32:15 [rsSync] replSet initial sync pending
m31102| Wed Jun 13 22:32:15 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state STARTUP2
m31100| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state STARTUP2
m31100| Wed Jun 13 22:32:16 [rsMgr] not electing self, tp2.10gen.cc:31102 would veto
m31100| Wed Jun 13 22:32:16 [rsMgr] not electing self, tp2.10gen.cc:31102 would veto
m31101| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is up
m31102| Wed Jun 13 22:32:16 [initandlisten] connection accepted from 184.173.149.242:52696 #4 (4 connections now open)
m31101| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state SECONDARY
m31102| Wed Jun 13 22:32:16 [conn4] authenticate db: local { authenticate: 1, nonce: "cebe7f3f305aa41e", user: "__system", key: "607967b2f31da840cb001406e9f428f7" }
m31101| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is up
m31101| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state STARTUP2
m31101| Wed Jun 13 22:32:16 [initandlisten] connection accepted from 184.173.149.242:56539 #4 (4 connections now open)
m31102| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is up
m31102| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state SECONDARY
m31101| Wed Jun 13 22:32:16 [conn4] authenticate db: local { authenticate: 1, nonce: "5e98b7d70f5f1536", user: "__system", key: "50a2d71a1f2c7660ab819f563ae2f45f" }
m31102| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is up
m31102| Wed Jun 13 22:32:16 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state STARTUP2
m30999| Wed Jun 13 22:32:21 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b4593454f4c315250fa
m30999| Wed Jun 13 22:32:21 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m31000| Wed Jun 13 22:32:22 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b46e24b46bcab13cf45
m31000| Wed Jun 13 22:32:22 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
m31100| Wed Jun 13 22:32:22 [rsMgr] replSet info electSelf 0
m31101| Wed Jun 13 22:32:22 [conn3] replSet received elect msg { replSetElect: 1, set: "d1", who: "tp2.10gen.cc:31100", whoid: 0, cfgver: 1, round: ObjectId('4fd95b468529b0c3a8f3d669') }
m31101| Wed Jun 13 22:32:22 [conn3] replSet RECOVERING
m31101| Wed Jun 13 22:32:22 [conn3] replSet info voting yea for tp2.10gen.cc:31100 (0)
m31102| Wed Jun 13 22:32:22 [conn3] replSet received elect msg { replSetElect: 1, set: "d1", who: "tp2.10gen.cc:31100", whoid: 0, cfgver: 1, round: ObjectId('4fd95b468529b0c3a8f3d669') }
m31102| Wed Jun 13 22:32:22 [conn3] replSet RECOVERING
m31102| Wed Jun 13 22:32:22 [conn3] replSet info voting yea for tp2.10gen.cc:31100 (0)
m31100| Wed Jun 13 22:32:22 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95b468529b0c3a8f3d669'), ok: 1.0 }
m31100| Wed Jun 13 22:32:22 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95b468529b0c3a8f3d669'), ok: 1.0 }
m31100| Wed Jun 13 22:32:22 [rsMgr] info PageFaultRetryableSection will not yield, already locked upon reaching
m31100| Wed Jun 13 22:32:22 [rsMgr] replSet PRIMARY
m31101| Wed Jun 13 22:32:22 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state PRIMARY
m31101| Wed Jun 13 22:32:22 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state RECOVERING
m31102| Wed Jun 13 22:32:22 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state PRIMARY
m31102| Wed Jun 13 22:32:22 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state RECOVERING
adding shard w/auth d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31000| Wed Jun 13 22:32:23 [conn] starting new replica set monitor for replica set d1 with seed of tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31100| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:42831 #5 (5 connections now open)
m31000| Wed Jun 13 22:32:23 [conn] successfully connected to seed tp2.10gen.cc:31100 for replica set d1
m31000| Wed Jun 13 22:32:23 [conn] changing hosts to { 0: "tp2.10gen.cc:31100", 1: "tp2.10gen.cc:31102", 2: "tp2.10gen.cc:31101" } from d1/
m31000| Wed Jun 13 22:32:23 [conn] trying to add new host tp2.10gen.cc:31100 to replica set d1
m31100| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:42832 #6 (6 connections now open)
m31000| Wed Jun 13 22:32:23 [conn] successfully connected to new host tp2.10gen.cc:31100 in replica set d1
m31000| Wed Jun 13 22:32:23 [conn] trying to add new host tp2.10gen.cc:31101 to replica set d1
m31101| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:56542 #5 (5 connections now open)
m31000| Wed Jun 13 22:32:23 [conn] successfully connected to new host tp2.10gen.cc:31101 in replica set d1
m31000| Wed Jun 13 22:32:23 [conn] trying to add new host tp2.10gen.cc:31102 to replica set d1
m31000| Wed Jun 13 22:32:23 [conn] successfully connected to new host tp2.10gen.cc:31102 in replica set d1
m31102| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:52701 #5 (5 connections now open)
m31100| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:42835 #7 (7 connections now open)
m31100| Wed Jun 13 22:32:23 [conn7] authenticate db: local { authenticate: 1, nonce: "4a32fd8531819585", user: "__system", key: "349a0c28ef2d1b06417ce56be9eb0525" }
m31100| Wed Jun 13 22:32:23 [conn5] end connection 184.173.149.242:42831 (6 connections now open)
m31000| Wed Jun 13 22:32:23 [conn] Primary for replica set d1 changed to tp2.10gen.cc:31100
m31101| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:56545 #6 (6 connections now open)
m31101| Wed Jun 13 22:32:23 [conn6] authenticate db: local { authenticate: 1, nonce: "410fc5d3155b178b", user: "__system", key: "7b0ccbfb3c8b27f95b039d99950ba75b" }
m31102| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:52704 #6 (6 connections now open)
m31102| Wed Jun 13 22:32:23 [conn6] authenticate db: local { authenticate: 1, nonce: "116275f1a45f13d5", user: "__system", key: "e284c964819fcb9a995e423c77cd70f7" }
m31000| Wed Jun 13 22:32:23 [conn] replica set monitor for replica set d1 started, address is d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31000| Wed Jun 13 22:32:23 [ReplicaSetMonitorWatcher] starting
m31100| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:42838 #8 (7 connections now open)
m31100| Wed Jun 13 22:32:23 [conn8] authenticate db: local { authenticate: 1, nonce: "1610a0ee662ff33c", user: "__system", key: "be254cef68dd2902f9e4beceda67a267" }
m31000| Wed Jun 13 22:32:23 [conn] going to add shard: { _id: "d1", host: "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102" }
m31000| Wed Jun 13 22:32:23 [conn] couldn't find database [test] in config db
m31000| Wed Jun 13 22:32:23 [conn] put [test] on: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31000| Wed Jun 13 22:32:23 [conn] enabling sharding on: test
m31100| Wed Jun 13 22:32:23 [conn8] _DEBUG ReadContext db wasn't open, will try to open test.system.indexes
m31100| Wed Jun 13 22:32:23 [conn8] opening db: test
m31000| Wed Jun 13 22:32:23 [conn] CMD: shardcollection: { shardCollection: "test.foo", key: { x: 1.0 } }
m31000| Wed Jun 13 22:32:23 [conn] enable sharding on: test.foo with shard key: { x: 1.0 }
m31000| Wed Jun 13 22:32:23 [conn] going to create 1 chunk(s) for: test.foo using new epoch 4fd95b47e24b46bcab13cf46
m31100| Wed Jun 13 22:32:23 [FileAllocator] allocating new datafile /data/db/d1-0/test.ns, filling with zeroes...
m31000| Wed Jun 13 22:32:23 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||4fd95b47e24b46bcab13cf46 based on: (empty)
m31100| Wed Jun 13 22:32:23 [FileAllocator] done allocating datafile /data/db/d1-0/test.ns, size: 16MB, took 0.035 secs
m31000| Wed Jun 13 22:32:23 [conn] DEV WARNING appendDate() called with a tiny (but nonzero) date
m29000| Wed Jun 13 22:32:23 [conn9] build index config.collections { _id: 1 }
m29000| Wed Jun 13 22:32:23 [conn9] build index done. scanned 0 total records. 0 secs
m31100| Wed Jun 13 22:32:23 [FileAllocator] allocating new datafile /data/db/d1-0/test.0, filling with zeroes...
m31100| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:42839 #9 (8 connections now open)
m31100| Wed Jun 13 22:32:23 [conn9] authenticate db: local { authenticate: 1, nonce: "902827e6c503c436", user: "__system", key: "08dc8935c5d89ebd56d371bd3849636b" }
m31000| Wed Jun 13 22:32:23 [conn] creating WriteBackListener for: tp2.10gen.cc:31100 serverID: 4fd95b1ee24b46bcab13cf40
m31000| Wed Jun 13 22:32:23 [conn] creating WriteBackListener for: tp2.10gen.cc:31101 serverID: 4fd95b1ee24b46bcab13cf40
m31000| Wed Jun 13 22:32:23 [conn] creating WriteBackListener for: tp2.10gen.cc:31102 serverID: 4fd95b1ee24b46bcab13cf40
m31100| Wed Jun 13 22:32:23 [FileAllocator] done allocating datafile /data/db/d1-0/test.0, size: 16MB, took 0.042 secs
m31100| Wed Jun 13 22:32:23 [conn8] datafileheader::init initializing /data/db/d1-0/test.0 n:0
m31100| Wed Jun 13 22:32:23 [conn8] build index test.foo { _id: 1 }
m31100| Wed Jun 13 22:32:23 [conn8] build index done. scanned 0 total records. 0 secs
m31100| Wed Jun 13 22:32:23 [conn8] info: creating collection test.foo on add index
m31100| Wed Jun 13 22:32:23 [conn8] build index test.foo { x: 1.0 }
m31100| Wed Jun 13 22:32:23 [conn8] build index done. scanned 0 total records. 0 secs
m31100| Wed Jun 13 22:32:23 [conn9] no current chunk manager found for this shard, will initialize
m29000| Wed Jun 13 22:32:23 [initandlisten] connection accepted from 184.173.149.242:55493 #13 (13 connections now open)
m29000| Wed Jun 13 22:32:23 [conn13] authenticate db: local { authenticate: 1, nonce: "8401abca4c996751", user: "__system", key: "43d72c6d161ce124ead1204b208c9e10" }
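The mongos log above records the authorized addShard succeeding, the test database being placed on d1, and test.foo being sharded on { x: 1 }. A hedged sketch of the shell commands that drive that sequence, shown with runCommand against mongos (the sh.* helpers wrap the same commands):

// Sketch of the commands behind the mongos log above, issued against mongos:
var admin = db.getSiblingDB("admin");
admin.runCommand({ addShard: "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102" });
admin.runCommand({ enableSharding: "test" });
admin.runCommand({ shardCollection: "test.foo", key: { x: 1 } });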
ReplSetTest waitForIndicator state on connection to tp2.10gen.cc:31101
[ 2 ]
ReplSetTest waitForIndicator from node connection to tp2.10gen.cc:31101
ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) :
{
	"set" : "d1",
	"date" : ISODate("2012-06-14T03:32:23Z"),
	"myState" : 1,
	"members" : [
		{
			"_id" : 0,
			"name" : "tp2.10gen.cc:31100",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 19,
			"optime" : Timestamp(1339644743000, 1),
			"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
			"self" : true
		},
		{
			"_id" : 1,
			"name" : "tp2.10gen.cc:31101",
			"health" : 1,
			"state" : 5,
			"stateStr" : "STARTUP2",
			"uptime" : 9,
			"optime" : Timestamp(0, 0),
			"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
			"lastHeartbeat" : ISODate("2012-06-14T03:32:22Z"),
			"pingMs" : 0,
			"errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
		},
		{
			"_id" : 2,
			"name" : "tp2.10gen.cc:31102",
			"health" : 1,
			"state" : 5,
			"stateStr" : "STARTUP2",
			"uptime" : 9,
			"optime" : Timestamp(0, 0),
			"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
			"lastHeartbeat" : ISODate("2012-06-14T03:32:22Z"),
			"pingMs" : 0,
			"errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
		}
	],
	"ok" : 1
}
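waitForIndicator polls this replSetGetStatus output until tp2.10gen.cc:31101 reports state 2 (SECONDARY); the "Status : 5 target state : 2" lines below are that check observing the member still in STARTUP2. A minimal sketch of the same polling loop, assuming a shell connection to one of the members (this is an illustrative shape, not the helper's source):

// Sketch: wait up to 300 seconds for tp2.10gen.cc:31101 to report SECONDARY.
assert.soon(function() {
    var status = db.getSiblingDB("admin").runCommand({ replSetGetStatus: 1 });
    var member = status.members.filter(function(m) { return m.name == "tp2.10gen.cc:31101"; })[0];
    return member && member.state == 2;   // 2 == SECONDARY; the log shows 5 (STARTUP2), then 3 (RECOVERING)
}, "tp2.10gen.cc:31101 never reached SECONDARY", 300000);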
Status for : tp2.10gen.cc:31100, checking tp2.10gen.cc:31101/tp2.10gen.cc:31101
Status for : tp2.10gen.cc:31101, checking tp2.10gen.cc:31101/tp2.10gen.cc:31101
Status : 5 target state : 2
Status for : tp2.10gen.cc:31102, checking tp2.10gen.cc:31101/tp2.10gen.cc:31101
m31100| Wed Jun 13 22:32:24 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state RECOVERING
m31100| Wed Jun 13 22:32:24 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state RECOVERING
m31101| Wed Jun 13 22:32:28 [conn3] end connection 184.173.149.242:56534 (5 connections now open)
m31101| Wed Jun 13 22:32:28 [initandlisten] connection accepted from 184.173.149.242:56550 #7 (6 connections now open)
m31101| Wed Jun 13 22:32:28 [conn7] authenticate db: local { authenticate: 1, nonce: "40043d78180b3684", user: "__system", key: "e212d5b0e703fde24c63d5d6a0878055" }
{
	"set" : "d1",
	"date" : ISODate("2012-06-14T03:32:29Z"),
	"myState" : 1,
	"members" : [
		{
			"_id" : 0,
			"name" : "tp2.10gen.cc:31100",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 25,
			"optime" : Timestamp(1339644743000, 1),
			"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
			"self" : true
		},
		{
			"_id" : 1,
			"name" : "tp2.10gen.cc:31101",
			"health" : 1,
			"state" : 3,
			"stateStr" : "RECOVERING",
			"uptime" : 15,
			"optime" : Timestamp(0, 0),
			"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
			"lastHeartbeat" : ISODate("2012-06-14T03:32:28Z"),
			"pingMs" : 0,
			"errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
		},
		{
			"_id" : 2,
			"name" : "tp2.10gen.cc:31102",
			"health" : 1,
			"state" : 3,
			"stateStr" : "RECOVERING",
			"uptime" : 15,
			"optime" : Timestamp(0, 0),
			"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
			"lastHeartbeat" : ISODate("2012-06-14T03:32:28Z"),
			"pingMs" : 0,
			"errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
		}
	],
	"ok" : 1
}
Status for : tp2.10gen.cc:31100, checking tp2.10gen.cc:31101/tp2.10gen.cc:31101
Status for : tp2.10gen.cc:31101, checking tp2.10gen.cc:31101/tp2.10gen.cc:31101
Status : 3 target state : 2
Status for : tp2.10gen.cc:31102, checking tp2.10gen.cc:31101/tp2.10gen.cc:31101
m31100| Wed Jun 13 22:32:30 [conn3] end connection 184.173.149.242:42827 (7 connections now open)
m31100| Wed Jun 13 22:32:30 [initandlisten] connection accepted from 184.173.149.242:42842 #10 (8 connections now open)
m31100| Wed Jun 13 22:32:30 [conn10] authenticate db: local { authenticate: 1, nonce: "640a67326f8436ab", user: "__system", key: "e080bfc1d3c8b32d53496fc85ff4f696" }
m31100| Wed Jun 13 22:32:30 [conn4] end connection 184.173.149.242:42828 (7 connections now open)
m31100| Wed Jun 13 22:32:30 [initandlisten] connection accepted from 184.173.149.242:42843 #11 (8 connections now open)
m31100| Wed Jun 13 22:32:30 [conn11] authenticate db: local { authenticate: 1, nonce: "1a988b6d0ca1a5f6", user: "__system", key: "177079e0be4e20e09b6acfb042ecd8ac" }
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync pending
m31101| Wed Jun 13 22:32:30 [rsSync] replSet syncing to: tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:30 [initandlisten] connection accepted from 184.173.149.242:42844 #12 (9 connections now open)
m31100| Wed Jun 13 22:32:30 [conn12] authenticate db: local { authenticate: 1, nonce: "78452a3300d3d96a", user: "__system", key: "1e9a4b6803da5cf8c6cf9f89d9ef2526" }
m31101| Wed Jun 13 22:32:30 [rsSync] build index local.me { _id: 1 }
m31101| Wed Jun 13 22:32:30 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync drop all databases
m31101| Wed Jun 13 22:32:30 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync clone all databases
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync cloning db: test
m31101| Wed Jun 13 22:32:30 [rsSync] opening db: test
m31100| Wed Jun 13 22:32:30 [initandlisten] connection accepted from 184.173.149.242:42845 #13 (10 connections now open)
m31100| Wed Jun 13 22:32:30 [conn13] authenticate db: local { authenticate: 1, nonce: "b5dbdd9fa42177a0", user: "__system", key: "50c3fc9aa7b4638e8c32fa885a3294d0" }
m31101| Wed Jun 13 22:32:30 [FileAllocator] allocating new datafile /data/db/d1-1/test.ns, filling with zeroes...
m31101| Wed Jun 13 22:32:30 [FileAllocator] done allocating datafile /data/db/d1-1/test.ns, size: 16MB, took 0.047 secs
m31101| Wed Jun 13 22:32:30 [FileAllocator] allocating new datafile /data/db/d1-1/test.0, filling with zeroes...
m31101| Wed Jun 13 22:32:30 [FileAllocator] done allocating datafile /data/db/d1-1/test.0, size: 16MB, took 0.039 secs
m31101| Wed Jun 13 22:32:30 [rsSync] datafileheader::init initializing /data/db/d1-1/test.0 n:0
m31101| Wed Jun 13 22:32:30 [rsSync] build index test.foo { _id: 1 }
m31101| Wed Jun 13 22:32:30 [rsSync] fastBuildIndex dupsToDrop:0
m31101| Wed Jun 13 22:32:30 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync cloning db: admin
m31100| Wed Jun 13 22:32:30 [conn13] end connection 184.173.149.242:42845 (9 connections now open)
m31100| Wed Jun 13 22:32:30 [initandlisten] connection accepted from 184.173.149.242:42846 #14 (10 connections now open)
m31100| Wed Jun 13 22:32:30 [conn14] authenticate db: local { authenticate: 1, nonce: "f5c4b21ebb34d1c7", user: "__system", key: "e417b9b5c814318df707d69be24adcc6" }
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync data copy, starting syncup
m31100| Wed Jun 13 22:32:30 [conn14] end connection 184.173.149.242:42846 (9 connections now open)
m31101| Wed Jun 13 22:32:30 [rsSync] build index test.foo { x: 1.0 }
m31101| Wed Jun 13 22:32:30 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync building indexes
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync cloning indexes for : test
m31100| Wed Jun 13 22:32:30 [initandlisten] connection accepted from 184.173.149.242:42847 #15 (10 connections now open)
m31100| Wed Jun 13 22:32:30 [conn15] authenticate db: local { authenticate: 1, nonce: "41566b6814d95250", user: "__system", key: "9cb0712977d5d35ad743bee1be8524e1" }
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync cloning indexes for : admin
m31100| Wed Jun 13 22:32:30 [conn15] end connection 184.173.149.242:42847 (9 connections now open)
m31100| Wed Jun 13 22:32:30 [initandlisten] connection accepted from 184.173.149.242:42848 #16 (10 connections now open)
m31100| Wed Jun 13 22:32:30 [conn16] authenticate db: local { authenticate: 1, nonce: "8f28e0abcd9a077a", user: "__system", key: "bc87550f50ebb0f76d020ff2deb138a4" }
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync query minValid
m31100| Wed Jun 13 22:32:30 [conn16] end connection 184.173.149.242:42848 (9 connections now open)
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync finishing up
m31101| Wed Jun 13 22:32:30 [rsSync] replSet set minValid=4fd95b47:1
m31101| Wed Jun 13 22:32:30 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Wed Jun 13 22:32:30 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Wed Jun 13 22:32:30 [rsSync] replSet initial sync done
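The block above is one complete initial sync on tp2.10gen.cc:31101: drop everything except local, clone the test and admin databases, copy changes, build the secondary indexes, write local.replset.minvalid, and report "initial sync done". A hedged way to watch the same progression from a separate shell (keyFile authentication against this cluster omitted for brevity):

    var sec = new Mongo("tp2.10gen.cc:31101");
    sec.setSlaveOk();                                               // permit reads once the node is not primary
    sec.getDB("admin").runCommand({ replSetGetStatus: 1 });         // stateStr moves STARTUP2 -> RECOVERING -> SECONDARY
    sec.getDB("local").getCollection("replset.minvalid").findOne(); // written in the "finishing up" step above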
m31100| Wed Jun 13 22:32:30 [conn12] end connection 184.173.149.242:42844 (8 connections now open)
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync pending
m31102| Wed Jun 13 22:32:31 [rsSync] replSet syncing to: tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42849 #17 (9 connections now open)
m31100| Wed Jun 13 22:32:31 [conn17] authenticate db: local { authenticate: 1, nonce: "3edaac0a3957fa1c", user: "__system", key: "bf3499d5f43f77348130536481547caa" }
m31102| Wed Jun 13 22:32:31 [rsSync] build index local.me { _id: 1 }
m31102| Wed Jun 13 22:32:31 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync drop all databases
m31102| Wed Jun 13 22:32:31 [rsSync] dropAllDatabasesExceptLocal 1
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync clone all databases
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync cloning db: test
m31102| Wed Jun 13 22:32:31 [rsSync] opening db: test
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42850 #18 (10 connections now open)
m31100| Wed Jun 13 22:32:31 [conn18] authenticate db: local { authenticate: 1, nonce: "d260e2bac029686b", user: "__system", key: "9880aa9f24e9bd3417cd113e2b452c1b" }
m31102| Wed Jun 13 22:32:31 [FileAllocator] allocating new datafile /data/db/d1-2/test.ns, filling with zeroes...
m31102| Wed Jun 13 22:32:31 [FileAllocator] done allocating datafile /data/db/d1-2/test.ns, size: 16MB, took 0.045 secs
m31102| Wed Jun 13 22:32:31 [FileAllocator] allocating new datafile /data/db/d1-2/test.0, filling with zeroes...
m31102| Wed Jun 13 22:32:31 [FileAllocator] done allocating datafile /data/db/d1-2/test.0, size: 16MB, took 0.035 secs
m31102| Wed Jun 13 22:32:31 [rsSync] datafileheader::init initializing /data/db/d1-2/test.0 n:0
m31102| Wed Jun 13 22:32:31 [rsSync] build index test.foo { _id: 1 }
m31102| Wed Jun 13 22:32:31 [rsSync] fastBuildIndex dupsToDrop:0
m31102| Wed Jun 13 22:32:31 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync cloning db: admin
m31100| Wed Jun 13 22:32:31 [conn18] end connection 184.173.149.242:42850 (9 connections now open)
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42851 #19 (10 connections now open)
m31100| Wed Jun 13 22:32:31 [conn19] authenticate db: local { authenticate: 1, nonce: "41638a82d2b9eb8", user: "__system", key: "882cdfe7c99f1c7f95649f308bb83676" }
m31100| Wed Jun 13 22:32:31 [conn19] end connection 184.173.149.242:42851 (9 connections now open)
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync data copy, starting syncup
m31102| Wed Jun 13 22:32:31 [rsSync] build index test.foo { x: 1.0 }
m31102| Wed Jun 13 22:32:31 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync building indexes
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync cloning indexes for : test
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42852 #20 (10 connections now open)
m31100| Wed Jun 13 22:32:31 [conn20] authenticate db: local { authenticate: 1, nonce: "b671fbf074fb43cd", user: "__system", key: "c803d9d4cc712fda949e2481df420983" }
m31100| Wed Jun 13 22:32:31 [conn20] end connection 184.173.149.242:42852 (9 connections now open)
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync cloning indexes for : admin
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42853 #21 (10 connections now open)
m31100| Wed Jun 13 22:32:31 [conn21] authenticate db: local { authenticate: 1, nonce: "457d01658dec1cbf", user: "__system", key: "71aea9c61872ec08e5569d3fdeb068d3" }
m31100| Wed Jun 13 22:32:31 [conn21] end connection 184.173.149.242:42853 (9 connections now open)
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync query minValid
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync finishing up
m31102| Wed Jun 13 22:32:31 [rsSync] replSet set minValid=4fd95b47:1
m31102| Wed Jun 13 22:32:31 [rsSync] build index local.replset.minvalid { _id: 1 }
m31102| Wed Jun 13 22:32:31 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Wed Jun 13 22:32:31 [conn17] end connection 184.173.149.242:42849 (8 connections now open)
m31102| Wed Jun 13 22:32:31 [rsSync] replSet initial sync done
m31101| Wed Jun 13 22:32:31 [rsBackgroundSync] replSet syncing to: tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42854 #22 (9 connections now open)
m31100| Wed Jun 13 22:32:31 [conn22] authenticate db: local { authenticate: 1, nonce: "4653cf588343e807", user: "__system", key: "f1a526870c118921d32a6edc742ab99d" }
m31101| Wed Jun 13 22:32:31 [rsBackgroundSync] replSet remoteOldestOp: Jun 13 22:32:05 4fd95b35:1
m31101| Wed Jun 13 22:32:31 [rsBackgroundSync] replSet lastOpTimeFetched: Jun 13 22:32:23 4fd95b47:1
m31100| Wed Jun 13 22:32:31 [conn22] query has no more but tailable, cursorid: 3841827226317008651
m31102| Wed Jun 13 22:32:31 [rsBackgroundSync] replSet syncing to: tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42855 #23 (10 connections now open)
m31100| Wed Jun 13 22:32:31 [conn23] authenticate db: local { authenticate: 1, nonce: "ee932743dfa7718c", user: "__system", key: "1167eb80abfb48ec0f9b891328112ff9" }
m31102| Wed Jun 13 22:32:31 [rsBackgroundSync] replSet remoteOldestOp: Jun 13 22:32:05 4fd95b35:1
m31102| Wed Jun 13 22:32:31 [rsBackgroundSync] replSet lastOpTimeFetched: Jun 13 22:32:23 4fd95b47:1
m31100| Wed Jun 13 22:32:31 [conn23] query has no more but tailable, cursorid: 8063521744592089051
m31101| Wed Jun 13 22:32:31 [rsSyncNotifier] replset setting oplog notifier to tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42856 #24 (11 connections now open)
m31100| Wed Jun 13 22:32:31 [conn24] authenticate db: local { authenticate: 1, nonce: "2ec53dc558f08aa3", user: "__system", key: "7212be4c98a1b09f0e5e2e1473a4ba6b" }
m31100| Wed Jun 13 22:32:31 [conn24] query has no more but tailable, cursorid: 4822143134988021748
m31101| Wed Jun 13 22:32:31 [rsSync] replSet SECONDARY
m30999| Wed Jun 13 22:32:31 [Balancer] starting new replica set monitor for replica set d1 with seed of tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42857 #25 (12 connections now open)
m30999| Wed Jun 13 22:32:31 [Balancer] successfully connected to seed tp2.10gen.cc:31100 for replica set d1
m30999| Wed Jun 13 22:32:31 [Balancer] changing hosts to { 0: "tp2.10gen.cc:31100", 1: "tp2.10gen.cc:31102", 2: "tp2.10gen.cc:31101" } from d1/
m30999| Wed Jun 13 22:32:31 [Balancer] trying to add new host tp2.10gen.cc:31100 to replica set d1
m30999| Wed Jun 13 22:32:31 [Balancer] successfully connected to new host tp2.10gen.cc:31100 in replica set d1
m30999| Wed Jun 13 22:32:31 [Balancer] trying to add new host tp2.10gen.cc:31101 to replica set d1
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42858 #26 (13 connections now open)
m30999| Wed Jun 13 22:32:31 [Balancer] successfully connected to new host tp2.10gen.cc:31101 in replica set d1
m30999| Wed Jun 13 22:32:31 [Balancer] trying to add new host tp2.10gen.cc:31102 to replica set d1
m31101| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:56568 #8 (7 connections now open)
m30999| Wed Jun 13 22:32:31 [Balancer] successfully connected to new host tp2.10gen.cc:31102 in replica set d1
m31102| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:52727 #7 (7 connections now open)
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42861 #27 (14 connections now open)
m31100| Wed Jun 13 22:32:31 [conn27] authenticate db: local { authenticate: 1, nonce: "132e01b06ddf6241", user: "__system", key: "cce54323499e94fbba4b98295838cbc5" }
m31100| Wed Jun 13 22:32:31 [conn25] end connection 184.173.149.242:42857 (13 connections now open)
m30999| Wed Jun 13 22:32:31 [Balancer] Primary for replica set d1 changed to tp2.10gen.cc:31100
m31101| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:56571 #9 (8 connections now open)
m31101| Wed Jun 13 22:32:31 [conn9] authenticate db: local { authenticate: 1, nonce: "7cdabc7a636e03b1", user: "__system", key: "7b89d050e86f1c403414958c599948cc" }
m31102| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:52730 #8 (8 connections now open)
m31102| Wed Jun 13 22:32:31 [conn8] authenticate db: local { authenticate: 1, nonce: "a5344e2fb3fe137c", user: "__system", key: "faa1caa4071072eb6c460103ce3fc33f" }
m30999| Wed Jun 13 22:32:31 [Balancer] replica set monitor for replica set d1 started, address is d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31100| Wed Jun 13 22:32:31 [initandlisten] connection accepted from 184.173.149.242:42864 #28 (14 connections now open)
m31100| Wed Jun 13 22:32:31 [conn28] authenticate db: local { authenticate: 1, nonce: "45e5c07279e72c29", user: "__system", key: "87e0fe4876b0b5e31288cc9eb3548773" }
m30999| Wed Jun 13 22:32:31 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b4f93454f4c315250fb
m30999| Wed Jun 13 22:32:31 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m31102| Wed Jun 13 22:32:32 [rsSyncNotifier] replset setting oplog notifier to tp2.10gen.cc:31100
m31100| Wed Jun 13 22:32:32 [initandlisten] connection accepted from 184.173.149.242:42865 #29 (15 connections now open)
m31100| Wed Jun 13 22:32:32 [conn29] authenticate db: local { authenticate: 1, nonce: "1e300b0363af71a5", user: "__system", key: "1d27e26cbb6891162d9b07d91d2d11f1" }
m31100| Wed Jun 13 22:32:32 [conn29] query has no more but tailable, cursorid: 1091043721409769766
m31102| Wed Jun 13 22:32:32 [rsSync] replSet SECONDARY
m31000| Wed Jun 13 22:32:32 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b50e24b46bcab13cf47
m31000| Wed Jun 13 22:32:32 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
m31100| Wed Jun 13 22:32:32 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state SECONDARY
m31100| Wed Jun 13 22:32:32 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state SECONDARY
m31101| Wed Jun 13 22:32:32 [rsHealthPoll] replSet member tp2.10gen.cc:31102 is now in state SECONDARY
m31102| Wed Jun 13 22:32:32 [rsHealthPoll] replSet member tp2.10gen.cc:31101 is now in state SECONDARY
m31100| Wed Jun 13 22:32:33 [initandlisten] connection accepted from 184.173.149.242:42866 #30 (16 connections now open)
m31100| Wed Jun 13 22:32:33 [conn30] authenticate db: local { authenticate: 1, nonce: "20a7ad7726c51a24", user: "__system", key: "b2a409ba3394a2e001a97c8bb482f687" }
m31101| Wed Jun 13 22:32:33 [initandlisten] connection accepted from 184.173.149.242:56576 #10 (9 connections now open)
m31101| Wed Jun 13 22:32:33 [conn10] authenticate db: local { authenticate: 1, nonce: "54177501dbbbb7fb", user: "__system", key: "db0aaedff2459fda2422c7ee0d826a16" }
m31102| Wed Jun 13 22:32:33 [initandlisten] connection accepted from 184.173.149.242:52735 #9 (9 connections now open)
m31102| Wed Jun 13 22:32:33 [conn9] authenticate db: local { authenticate: 1, nonce: "acee760f4cdfef49", user: "__system", key: "8cf8f04eea0f9c950619ca0aef297e16" }
ReplSetTest waitForIndicator final status:
{
"set" : "d1",
"date" : ISODate("2012-06-14T03:32:33Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "tp2.10gen.cc:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 29,
"optime" : Timestamp(1339644743000, 1),
"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
"self" : true
},
{
"_id" : 1,
"name" : "tp2.10gen.cc:31101",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 19,
"optime" : Timestamp(1339644743000, 1),
"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
"lastHeartbeat" : ISODate("2012-06-14T03:32:32Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "tp2.10gen.cc:31102",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 19,
"optime" : Timestamp(1339644743000, 1),
"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
"lastHeartbeat" : ISODate("2012-06-14T03:32:32Z"),
"pingMs" : 0
}
],
"ok" : 1
}
ReplSetTest waitForIndicator state on connection to tp2.10gen.cc:31102
[ 2 ]
ReplSetTest waitForIndicator from node connection to tp2.10gen.cc:31102
ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) :
{
"set" : "d1",
"date" : ISODate("2012-06-14T03:32:33Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "tp2.10gen.cc:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 29,
"optime" : Timestamp(1339644743000, 1),
"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
"self" : true
},
{
"_id" : 1,
"name" : "tp2.10gen.cc:31101",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 19,
"optime" : Timestamp(1339644743000, 1),
"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
"lastHeartbeat" : ISODate("2012-06-14T03:32:32Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "tp2.10gen.cc:31102",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 19,
"optime" : Timestamp(1339644743000, 1),
"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
"lastHeartbeat" : ISODate("2012-06-14T03:32:32Z"),
"pingMs" : 0
}
],
"ok" : 1
}
Status for : tp2.10gen.cc:31100, checking tp2.10gen.cc:31102/tp2.10gen.cc:31102
Status for : tp2.10gen.cc:31101, checking tp2.10gen.cc:31102/tp2.10gen.cc:31102
Status for : tp2.10gen.cc:31102, checking tp2.10gen.cc:31102/tp2.10gen.cc:31102
Status : 2 target state : 2
ReplSetTest waitForIndicator final status:
{
"set" : "d1",
"date" : ISODate("2012-06-14T03:32:33Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "tp2.10gen.cc:31100",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 29,
"optime" : Timestamp(1339644743000, 1),
"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
"self" : true
},
{
"_id" : 1,
"name" : "tp2.10gen.cc:31101",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 19,
"optime" : Timestamp(1339644743000, 1),
"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
"lastHeartbeat" : ISODate("2012-06-14T03:32:32Z"),
"pingMs" : 0
},
{
"_id" : 2,
"name" : "tp2.10gen.cc:31102",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 19,
"optime" : Timestamp(1339644743000, 1),
"optimeDate" : ISODate("2012-06-14T03:32:23Z"),
"lastHeartbeat" : ISODate("2012-06-14T03:32:32Z"),
"pingMs" : 0
}
],
"ok" : 1
}
{
"user" : "bar",
"readOnly" : false,
"pwd" : "131d1786e1320446336c3943bfc7ba1c",
"_id" : ObjectId("4fd95b51e204cf4c84a13ae9")
}
m31100| Wed Jun 13 22:32:33 [conn9] build index test.system.users { _id: 1 }
m31100| Wed Jun 13 22:32:33 [conn9] build index done. scanned 0 total records. 0 secs
m31101| Wed Jun 13 22:32:33 [rsSync] build index test.system.users { _id: 1 }
m31101| Wed Jun 13 22:32:33 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:32:34 [rsSync] build index test.system.users { _id: 1 }
m31102| Wed Jun 13 22:32:34 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Wed Jun 13 22:32:34 [conn9] command admin.$cmd command: { getlasterror: 1.0, w: 3.0, wtimeout: 30000.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:275 r:309 w:1484 reslen:94 961ms
{
"user" : "sad",
"readOnly" : true,
"pwd" : "b874a27b7105ec1cfd1f26a5f7d27eca",
"_id" : ObjectId("4fd95b52e204cf4c84a13aea")
}
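The two documents above are what db.addUser() stores in test.system.users in this release: "bar" read-write and "sad" read-only, with pwd holding an MD5 digest rather than the cleartext password. A hedged sketch of the shell calls that produce such documents; the passwords shown are hypothetical, since the real ones cannot be recovered from the digests:

    var test = connect("tp2.10gen.cc:31000/test");      // via the mongos; cluster-admin login omitted
    test.addUser("bar", "barpassword");                 // readOnly defaults to false
    test.addUser("sad", "sadpassword", true);           // readOnly: true
    // The stored digest is hex_md5(user + ":mongo:" + password):
    hex_md5("bar" + ":mongo:" + "barpassword");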
query try
m31000| Wed Jun 13 22:32:34 [conn] couldn't find database [foo] in config db
m31000| Wed Jun 13 22:32:34 [conn] put [foo] on: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
"error { \"$err\" : \"unauthorized for db:foo level: 1\", \"code\" : 15845 }"
cmd try
"error { \"$err\" : \"unrecognized command: listdbs\", \"code\" : 13390 }"
insert try 1
m31000| Wed Jun 13 22:32:34 [conn] authenticate db: test { authenticate: 1.0, user: "bar", nonce: "138441b01fad3adf", key: "212a6b01fd01117fef9598d6f5604625" }
m31100| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 184.173.149.242:42869 #31 (17 connections now open)
m31100| Wed Jun 13 22:32:34 [conn31] authenticate db: local { authenticate: 1, nonce: "cc4ab6f94fcbbcd2", user: "__system", key: "ad44372c0e4be5bd83e9719372a0dc70" }
m31101| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 184.173.149.242:56579 #11 (10 connections now open)
m31101| Wed Jun 13 22:32:34 [conn11] authenticate db: local { authenticate: 1, nonce: "48c5aea6b38f8ff0", user: "__system", key: "9007deb0260ec2b029603821dfebb474" }
{ "dbname" : "test", "user" : "bar", "readOnly" : false, "ok" : 1 }
m31000| range.universal(): 1
insert try 2
m31000| range.universal(): 1
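The "query try", "cmd try", and "insert try" lines are the test exercising access control through the mongos: an unauthenticated read of db foo fails with code 15845, an invalid command name ("listdbs"; the real command is listDatabases) is rejected with code 13390, and the insert only succeeds after authenticating as "bar" against test. A hedged reproduction; the password, collection name, and document are illustrative only:

    var conn = new Mongo("tp2.10gen.cc:31000");
    conn.getDB("foo").bar.findOne();                    // "unauthorized for db:foo level: 1" (code 15845)
    conn.getDB("foo").runCommand({ listdbs: 1 });       // "unrecognized command: listdbs" (code 13390)
    conn.getDB("test").auth("bar", "barpassword");      // hypothetical password
    conn.getDB("test").foo.insert({ x: 1 });            // after auth, "insert try 2" goes through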
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201, 31202 ] 31200 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key1",
"port" : 31200,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d2",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 0,
"set" : "d2"
}
}
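The options document above is what ReplSetTest hands to its node starter for member 0 of set d2. A rough sketch of how such an object could map onto the mongod command line shown a few lines below: harness-only keys are dropped, "$set"/"$node" in dbpath are filled in from pathOpts, and empty-string values become bare flags. This is an illustration under those assumptions, not the actual ReplSetTest/shell_utils code:

    function optionsToArgv(opts) {
        var args = ["mongod"];
        for (var k in opts) {
            if (k == "useHostName" || k == "restart" || k == "pathOpts") continue;  // consumed by the harness
            var v = opts[k];
            if (k == "dbpath")                                   // expand "$set-$node" from pathOpts
                v = "/data/db/" + String(v).replace("$set", opts.pathOpts.set)
                                           .replace("$node", opts.pathOpts.node);   // data-dir prefix assumed
            args.push("--" + k);
            if (v !== "") args.push(String(v));                  // "" means a bare flag, e.g. --noprealloc
        }
        return args.join(" ");
    }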
ReplSetTest Starting....
Resetting db path '/data/db/d2-0'
Wed Jun 13 22:32:34 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31200 --noprealloc --smallfiles --rest --replSet d2 --dbpath /data/db/d2-0
m31200| note: noprealloc may hurt performance in many applications
m31200| Wed Jun 13 22:32:34
m31200| Wed Jun 13 22:32:34 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31200| Wed Jun 13 22:32:34
m31200| Wed Jun 13 22:32:34 [initandlisten] MongoDB starting : pid=10550 port=31200 dbpath=/data/db/d2-0 32-bit host=tp2.10gen.cc
m31200| Wed Jun 13 22:32:34 [initandlisten] _DEBUG build (which is slower)
m31200| Wed Jun 13 22:32:34 [initandlisten]
m31200| Wed Jun 13 22:32:34 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31200| Wed Jun 13 22:32:34 [initandlisten] ** Not recommended for production.
m31200| Wed Jun 13 22:32:34 [initandlisten]
m31200| Wed Jun 13 22:32:34 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31200| Wed Jun 13 22:32:34 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31200| Wed Jun 13 22:32:34 [initandlisten] ** with --journal, the limit is lower
m31200| Wed Jun 13 22:32:34 [initandlisten]
m31200| Wed Jun 13 22:32:34 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31200| Wed Jun 13 22:32:34 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31200| Wed Jun 13 22:32:34 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31200| Wed Jun 13 22:32:34 [initandlisten] options: { dbpath: "/data/db/d2-0", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31200, replSet: "d2", rest: true, smallfiles: true }
m31200| Wed Jun 13 22:32:34 [initandlisten] opening db: admin
m31200| Wed Jun 13 22:32:34 [initandlisten] waiting for connections on port 31200
m31200| Wed Jun 13 22:32:34 [websvr] admin web console waiting for connections on port 32200
m31200| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 184.173.149.242:41164 #1 (1 connection now open)
m31200| Wed Jun 13 22:32:34 [conn1] authenticate db: local { authenticate: 1, nonce: "a30912e9156f2fd2", user: "__system", key: "8f561c46166638359b92c3e8af2906b8" }
m31200| Wed Jun 13 22:32:34 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31200| Wed Jun 13 22:32:34 [conn1] opening db: local
m31200| Wed Jun 13 22:32:34 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31200| Wed Jun 13 22:32:34 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31200| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 127.0.0.1:56156 #2 (2 connections now open)
m31200| Wed Jun 13 22:32:34 [conn2] note: no users configured in admin.system.users, allowing localhost access
[ connection to tp2.10gen.cc:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201, 31202 ] 31201 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key1",
"port" : 31201,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d2",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 1,
"set" : "d2"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d2-1'
Wed Jun 13 22:32:34 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31201 --noprealloc --smallfiles --rest --replSet d2 --dbpath /data/db/d2-1
m31201| note: noprealloc may hurt performance in many applications
m31201| Wed Jun 13 22:32:34
m31201| Wed Jun 13 22:32:34 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31201| Wed Jun 13 22:32:34
m31201| Wed Jun 13 22:32:34 [initandlisten] MongoDB starting : pid=10566 port=31201 dbpath=/data/db/d2-1 32-bit host=tp2.10gen.cc
m31201| Wed Jun 13 22:32:34 [initandlisten] _DEBUG build (which is slower)
m31201| Wed Jun 13 22:32:34 [initandlisten]
m31201| Wed Jun 13 22:32:34 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31201| Wed Jun 13 22:32:34 [initandlisten] ** Not recommended for production.
m31201| Wed Jun 13 22:32:34 [initandlisten]
m31201| Wed Jun 13 22:32:34 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31201| Wed Jun 13 22:32:34 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31201| Wed Jun 13 22:32:34 [initandlisten] ** with --journal, the limit is lower
m31201| Wed Jun 13 22:32:34 [initandlisten]
m31201| Wed Jun 13 22:32:34 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31201| Wed Jun 13 22:32:34 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31201| Wed Jun 13 22:32:34 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31201| Wed Jun 13 22:32:34 [initandlisten] options: { dbpath: "/data/db/d2-1", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "d2", rest: true, smallfiles: true }
m31201| Wed Jun 13 22:32:34 [initandlisten] opening db: admin
m31201| Wed Jun 13 22:32:34 [initandlisten] waiting for connections on port 31201
m31201| Wed Jun 13 22:32:34 [websvr] admin web console waiting for connections on port 32201
m31201| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 184.173.149.242:59488 #1 (1 connection now open)
m31201| Wed Jun 13 22:32:34 [conn1] authenticate db: local { authenticate: 1, nonce: "9b2f01a3d970104f", user: "__system", key: "20f68296001ade6460467fb3fb22accd" }
m31201| Wed Jun 13 22:32:34 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31201| Wed Jun 13 22:32:34 [conn1] opening db: local
m31201| Wed Jun 13 22:32:34 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31201| Wed Jun 13 22:32:34 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31201| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 127.0.0.1:51286 #2 (2 connections now open)
m31201| Wed Jun 13 22:32:34 [conn2] note: no users configured in admin.system.users, allowing localhost access
[ connection to tp2.10gen.cc:31200, connection to tp2.10gen.cc:31201 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31200, 31201, 31202 ] 31202 number
{
"useHostName" : true,
"oplogSize" : 40,
"keyFile" : "jstests/libs/key1",
"port" : 31202,
"noprealloc" : "",
"smallfiles" : "",
"rest" : "",
"replSet" : "d2",
"dbpath" : "$set-$node",
"restart" : undefined,
"pathOpts" : {
"node" : 2,
"set" : "d2"
}
}
ReplSetTest Starting....
Resetting db path '/data/db/d2-2'
Wed Jun 13 22:32:34 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31202 --noprealloc --smallfiles --rest --replSet d2 --dbpath /data/db/d2-2
m31202| note: noprealloc may hurt performance in many applications
m31202| Wed Jun 13 22:32:34
m31202| Wed Jun 13 22:32:34 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m31202| Wed Jun 13 22:32:34
m31202| Wed Jun 13 22:32:34 [initandlisten] MongoDB starting : pid=10582 port=31202 dbpath=/data/db/d2-2 32-bit host=tp2.10gen.cc
m31202| Wed Jun 13 22:32:34 [initandlisten] _DEBUG build (which is slower)
m31202| Wed Jun 13 22:32:34 [initandlisten]
m31202| Wed Jun 13 22:32:34 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m31202| Wed Jun 13 22:32:34 [initandlisten] ** Not recommended for production.
m31202| Wed Jun 13 22:32:34 [initandlisten]
m31202| Wed Jun 13 22:32:34 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m31202| Wed Jun 13 22:32:34 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m31202| Wed Jun 13 22:32:34 [initandlisten] ** with --journal, the limit is lower
m31202| Wed Jun 13 22:32:34 [initandlisten]
m31202| Wed Jun 13 22:32:34 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m31202| Wed Jun 13 22:32:34 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m31202| Wed Jun 13 22:32:34 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m31202| Wed Jun 13 22:32:34 [initandlisten] options: { dbpath: "/data/db/d2-2", keyFile: "jstests/libs/key1", noprealloc: true, oplogSize: 40, port: 31202, replSet: "d2", rest: true, smallfiles: true }
m31202| Wed Jun 13 22:32:34 [initandlisten] opening db: admin
m31202| Wed Jun 13 22:32:34 [initandlisten] waiting for connections on port 31202
m31202| Wed Jun 13 22:32:34 [websvr] admin web console waiting for connections on port 32202
m31202| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 184.173.149.242:42934 #1 (1 connection now open)
m31202| Wed Jun 13 22:32:34 [conn1] authenticate db: local { authenticate: 1, nonce: "f394a01788790b33", user: "__system", key: "1c1bc0de61c5a854812068963367f6f8" }
m31202| Wed Jun 13 22:32:34 [conn1] _DEBUG ReadContext db wasn't open, will try to open local.system.replset
m31202| Wed Jun 13 22:32:34 [conn1] opening db: local
m31202| Wed Jun 13 22:32:34 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31202| Wed Jun 13 22:32:34 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31202| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 127.0.0.1:55326 #2 (2 connections now open)
m31202| Wed Jun 13 22:32:34 [conn2] note: no users configured in admin.system.users, allowing localhost access
[
connection to tp2.10gen.cc:31200,
connection to tp2.10gen.cc:31201,
connection to tp2.10gen.cc:31202
]
{
"replSetInitiate" : {
"_id" : "d2",
"members" : [
{
"_id" : 0,
"host" : "tp2.10gen.cc:31200"
},
{
"_id" : 1,
"host" : "tp2.10gen.cc:31201"
},
{
"_id" : 2,
"host" : "tp2.10gen.cc:31202"
}
]
}
}
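The document above is the config the shell sends to the first node as the replSetInitiate admin command (the same thing rs.initiate() wraps); the response a few lines below confirms it was saved locally. A hedged equivalent from a shell on the same host, connecting over localhost so the "no users configured ... allowing localhost access" exception noted above applies:

    var n0 = connect("localhost:31200/admin");
    n0.runCommand({
        replSetInitiate: {
            _id: "d2",
            members: [
                { _id: 0, host: "tp2.10gen.cc:31200" },
                { _id: 1, host: "tp2.10gen.cc:31201" },
                { _id: 2, host: "tp2.10gen.cc:31202" }
            ]
        }
    });
    // equivalently: rs.initiate({ _id: "d2", members: [ ... ] }) from a shell connected to port 31200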
m31200| Wed Jun 13 22:32:34 [conn2] replSet replSetInitiate admin command received from client
m31200| Wed Jun 13 22:32:34 [conn2] replSet replSetInitiate config object parses ok, 3 members specified
m31201| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 184.173.149.242:59493 #3 (3 connections now open)
m31201| Wed Jun 13 22:32:34 [conn3] authenticate db: local { authenticate: 1, nonce: "2a02ea4609124b0c", user: "__system", key: "c8e170700b8e116728bac43cd72dffa5" }
m31202| Wed Jun 13 22:32:34 [initandlisten] connection accepted from 184.173.149.242:42937 #3 (3 connections now open)
m31202| Wed Jun 13 22:32:34 [conn3] authenticate db: local { authenticate: 1, nonce: "c96323e45ecc5962", user: "__system", key: "2fa3e8103920a74bfe9c9ac5a846bf22" }
m31200| Wed Jun 13 22:32:34 [conn2] replSet replSetInitiate all members seem up
m31200| Wed Jun 13 22:32:34 [conn2] ******
m31200| Wed Jun 13 22:32:34 [conn2] creating replication oplog of size: 40MB...
m31200| Wed Jun 13 22:32:34 [FileAllocator] allocating new datafile /data/db/d2-0/local.ns, filling with zeroes...
m31200| Wed Jun 13 22:32:34 [FileAllocator] creating directory /data/db/d2-0/_tmp
m31200| Wed Jun 13 22:32:34 [FileAllocator] done allocating datafile /data/db/d2-0/local.ns, size: 16MB, took 0.041 secs
m31200| Wed Jun 13 22:32:34 [FileAllocator] allocating new datafile /data/db/d2-0/local.0, filling with zeroes...
m31200| Wed Jun 13 22:32:35 [FileAllocator] done allocating datafile /data/db/d2-0/local.0, size: 64MB, took 0.128 secs
m31200| Wed Jun 13 22:32:35 [conn2] datafileheader::init initializing /data/db/d2-0/local.0 n:0
m31200| Wed Jun 13 22:32:35 [conn2] ******
m31200| Wed Jun 13 22:32:35 [conn2] replSet info saving a newer config version to local.system.replset
m31200| Wed Jun 13 22:32:35 [conn2] replSet saveConfigLocally done
m31200| Wed Jun 13 22:32:35 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31200| Wed Jun 13 22:32:35 [conn2] command admin.$cmd command: { replSetInitiate: { _id: "d2", members: [ { _id: 0.0, host: "tp2.10gen.cc:31200" }, { _id: 1.0, host: "tp2.10gen.cc:31201" }, { _id: 2.0, host: "tp2.10gen.cc:31202" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:203652 r:119 w:72 reslen:112 205ms
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
m29000| Wed Jun 13 22:32:41 [clientcursormon] mem (MB) res:52 virt:196 mapped:64
m30999| Wed Jun 13 22:32:41 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b5993454f4c315250fc
m30999| Wed Jun 13 22:32:42 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m31000| Wed Jun 13 22:32:42 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b5ae24b46bcab13cf48
m31000| Wed Jun 13 22:32:42 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
m31102| Wed Jun 13 22:32:42 [conn3] end connection 184.173.149.242:52693 (8 connections now open)
m31102| Wed Jun 13 22:32:42 [initandlisten] connection accepted from 184.173.149.242:52749 #10 (9 connections now open)
m31102| Wed Jun 13 22:32:42 [conn10] authenticate db: local { authenticate: 1, nonce: "29e2bf64c29fd5e8", user: "__system", key: "5543892c7b596905fdcef5b80d26c420" }
m31200| Wed Jun 13 22:32:44 [rsStart] replSet load config ok from self
m31200| Wed Jun 13 22:32:44 [rsStart] replSet I am tp2.10gen.cc:31200
m31200| Wed Jun 13 22:32:44 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31202
m31200| Wed Jun 13 22:32:44 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31201
m31200| Wed Jun 13 22:32:44 [rsStart] replSet STARTUP2
m31200| Wed Jun 13 22:32:44 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is up
m31200| Wed Jun 13 22:32:44 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is up
m31200| Wed Jun 13 22:32:44 [rsSync] replSet SECONDARY
m31201| Wed Jun 13 22:32:44 [rsStart] trying to contact tp2.10gen.cc:31200
m31200| Wed Jun 13 22:32:44 [initandlisten] connection accepted from 184.173.149.242:41175 #3 (3 connections now open)
m31200| Wed Jun 13 22:32:44 [conn3] authenticate db: local { authenticate: 1, nonce: "bd408972f454b3d1", user: "__system", key: "b5ec23428c333032e570b6b0c12f9213" }
m31201| Wed Jun 13 22:32:44 [rsStart] replSet load config ok from tp2.10gen.cc:31200
m31201| Wed Jun 13 22:32:44 [rsStart] replSet I am tp2.10gen.cc:31201
m31201| Wed Jun 13 22:32:44 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31202
m31201| Wed Jun 13 22:32:44 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31200
m31201| Wed Jun 13 22:32:44 [rsStart] replSet got config version 1 from a remote, saving locally
m31201| Wed Jun 13 22:32:44 [rsStart] replSet info saving a newer config version to local.system.replset
m31201| Wed Jun 13 22:32:44 [FileAllocator] allocating new datafile /data/db/d2-1/local.ns, filling with zeroes...
m31201| Wed Jun 13 22:32:44 [FileAllocator] creating directory /data/db/d2-1/_tmp
m31201| Wed Jun 13 22:32:44 [FileAllocator] done allocating datafile /data/db/d2-1/local.ns, size: 16MB, took 0.037 secs
m31201| Wed Jun 13 22:32:44 [FileAllocator] allocating new datafile /data/db/d2-1/local.0, filling with zeroes...
m31102| Wed Jun 13 22:32:44 [conn4] end connection 184.173.149.242:52696 (8 connections now open)
m31102| Wed Jun 13 22:32:44 [initandlisten] connection accepted from 184.173.149.242:52751 #11 (9 connections now open)
m31102| Wed Jun 13 22:32:44 [conn11] authenticate db: local { authenticate: 1, nonce: "d80966225fba8de6", user: "__system", key: "623381dd39e4920ecae1a6ec21b4ada5" }
m31201| Wed Jun 13 22:32:44 [FileAllocator] done allocating datafile /data/db/d2-1/local.0, size: 16MB, took 0.034 secs
m31201| Wed Jun 13 22:32:44 [rsStart] datafileheader::init initializing /data/db/d2-1/local.0 n:0
m31201| Wed Jun 13 22:32:44 [rsStart] replSet saveConfigLocally done
m31201| Wed Jun 13 22:32:44 [rsStart] replSet STARTUP2
m31201| Wed Jun 13 22:32:44 [rsSync] ******
m31201| Wed Jun 13 22:32:44 [rsSync] creating replication oplog of size: 40MB...
m31201| Wed Jun 13 22:32:44 [FileAllocator] allocating new datafile /data/db/d2-1/local.1, filling with zeroes...
m31202| Wed Jun 13 22:32:44 [rsStart] trying to contact tp2.10gen.cc:31200
m31200| Wed Jun 13 22:32:44 [initandlisten] connection accepted from 184.173.149.242:41177 #4 (4 connections now open)
m31200| Wed Jun 13 22:32:44 [conn4] authenticate db: local { authenticate: 1, nonce: "38c2020500a66475", user: "__system", key: "e2be152e10538cc9959035f11770d35f" }
m31202| Wed Jun 13 22:32:44 [rsStart] replSet load config ok from tp2.10gen.cc:31200
m31202| Wed Jun 13 22:32:44 [rsStart] replSet I am tp2.10gen.cc:31202
m31202| Wed Jun 13 22:32:44 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31201
m31202| Wed Jun 13 22:32:44 [rsStart] starting rsHealthPoll for tp2.10gen.cc:31200
m31202| Wed Jun 13 22:32:44 [rsStart] replSet got config version 1 from a remote, saving locally
m31202| Wed Jun 13 22:32:44 [rsStart] replSet info saving a newer config version to local.system.replset
m31202| Wed Jun 13 22:32:44 [FileAllocator] allocating new datafile /data/db/d2-2/local.ns, filling with zeroes...
m31202| Wed Jun 13 22:32:44 [FileAllocator] creating directory /data/db/d2-2/_tmp
m31201| Wed Jun 13 22:32:44 [FileAllocator] done allocating datafile /data/db/d2-1/local.1, size: 64MB, took 0.131 secs
m31201| Wed Jun 13 22:32:44 [rsSync] datafileheader::init initializing /data/db/d2-1/local.1 n:1
m31101| Wed Jun 13 22:32:44 [conn4] end connection 184.173.149.242:56539 (9 connections now open)
m31101| Wed Jun 13 22:32:44 [initandlisten] connection accepted from 184.173.149.242:56595 #12 (10 connections now open)
m31101| Wed Jun 13 22:32:44 [conn12] authenticate db: local { authenticate: 1, nonce: "948e881da7f5f184", user: "__system", key: "b88f77788bdc7768b8c5a8f7b4e5666e" }
m31201| Wed Jun 13 22:32:44 [rsSync] ******
m31201| Wed Jun 13 22:32:44 [rsSync] replSet initial sync pending
m31201| Wed Jun 13 22:32:44 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31202| Wed Jun 13 22:32:44 [FileAllocator] done allocating datafile /data/db/d2-2/local.ns, size: 16MB, took 0.039 secs
m31202| Wed Jun 13 22:32:44 [FileAllocator] allocating new datafile /data/db/d2-2/local.0, filling with zeroes...
m31202| Wed Jun 13 22:32:44 [FileAllocator] done allocating datafile /data/db/d2-2/local.0, size: 16MB, took 0.035 secs
m31202| Wed Jun 13 22:32:44 [rsStart] datafileheader::init initializing /data/db/d2-2/local.0 n:0
m31202| Wed Jun 13 22:32:44 [rsStart] replSet saveConfigLocally done
m31202| Wed Jun 13 22:32:44 [rsStart] replSet STARTUP2
m31202| Wed Jun 13 22:32:44 [rsSync] ******
m31202| Wed Jun 13 22:32:44 [rsSync] creating replication oplog of size: 40MB...
m31202| Wed Jun 13 22:32:44 [FileAllocator] allocating new datafile /data/db/d2-2/local.1, filling with zeroes...
m31202| Wed Jun 13 22:32:45 [FileAllocator] done allocating datafile /data/db/d2-2/local.1, size: 64MB, took 0.291 secs
m31202| Wed Jun 13 22:32:45 [rsSync] datafileheader::init initializing /data/db/d2-2/local.1 n:1
m31202| Wed Jun 13 22:32:45 [rsSync] ******
m31202| Wed Jun 13 22:32:45 [rsSync] replSet initial sync pending
m31202| Wed Jun 13 22:32:45 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31200| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state STARTUP2
m31200| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state STARTUP2
m31200| Wed Jun 13 22:32:46 [rsMgr] not electing self, tp2.10gen.cc:31202 would veto
m31200| Wed Jun 13 22:32:46 [rsMgr] not electing self, tp2.10gen.cc:31202 would veto
m31201| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is up
m31201| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state SECONDARY
m31202| Wed Jun 13 22:32:46 [initandlisten] connection accepted from 184.173.149.242:42943 #4 (4 connections now open)
m31202| Wed Jun 13 22:32:46 [conn4] authenticate db: local { authenticate: 1, nonce: "350c4181d9c3b6eb", user: "__system", key: "85661aa6584ae23e4aa9a46147862eaf" }
m31201| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is up
m31201| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state STARTUP2
m31202| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is up
m31201| Wed Jun 13 22:32:46 [initandlisten] connection accepted from 184.173.149.242:59501 #4 (4 connections now open)
m31202| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state SECONDARY
m31201| Wed Jun 13 22:32:46 [conn4] authenticate db: local { authenticate: 1, nonce: "99e1cc9587fc6473", user: "__system", key: "f177d5874ca64db8e50d67e5ac9cb995" }
m31202| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is up
m31202| Wed Jun 13 22:32:46 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state STARTUP2
m30999| Wed Jun 13 22:32:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b6493454f4c315250fd
m30999| Wed Jun 13 22:32:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m31000| Wed Jun 13 22:32:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b64e24b46bcab13cf49
m31000| Wed Jun 13 22:32:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
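The recurring Balancer acquire/unlock pairs from the two mongos processes (ports 30999 and 31000) are rounds of the distributed balancer lock, which is stored in the config database. A hedged way to inspect it while the test runs (admin login against the mongos omitted):

    var cfg = connect("tp2.10gen.cc:31000/config");
    printjson(cfg.locks.findOne({ _id: "balancer" }));   // state, ts, and which mongos currently holds it
    printjson(cfg.lockpings.find().toArray());           // liveness pings from lock-holding processes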
m31200| Wed Jun 13 22:32:52 [rsMgr] replSet info electSelf 0
m31201| Wed Jun 13 22:32:52 [conn3] replSet received elect msg { replSetElect: 1, set: "d2", who: "tp2.10gen.cc:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd95b64b84ac0176ed7bcca') }
m31202| Wed Jun 13 22:32:52 [conn3] replSet received elect msg { replSetElect: 1, set: "d2", who: "tp2.10gen.cc:31200", whoid: 0, cfgver: 1, round: ObjectId('4fd95b64b84ac0176ed7bcca') }
m31202| Wed Jun 13 22:32:52 [conn3] replSet RECOVERING
m31202| Wed Jun 13 22:32:52 [conn3] replSet info voting yea for tp2.10gen.cc:31200 (0)
m31201| Wed Jun 13 22:32:52 [conn3] replSet RECOVERING
m31201| Wed Jun 13 22:32:52 [conn3] replSet info voting yea for tp2.10gen.cc:31200 (0)
m31200| Wed Jun 13 22:32:52 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95b64b84ac0176ed7bcca'), ok: 1.0 }
m31200| Wed Jun 13 22:32:52 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95b64b84ac0176ed7bcca'), ok: 1.0 }
m31200| Wed Jun 13 22:32:52 [rsMgr] info PageFaultRetryableSection will not yield, already locked upon reaching
m31200| Wed Jun 13 22:32:52 [rsMgr] replSet PRIMARY
m31201| Wed Jun 13 22:32:52 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state PRIMARY
m31201| Wed Jun 13 22:32:52 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state RECOVERING
m31202| Wed Jun 13 22:32:52 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state PRIMARY
m31202| Wed Jun 13 22:32:52 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state RECOVERING
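The lines above are one complete election round for d2: 31200 proposes itself (electSelf 0), 31201 and 31202 each vote yea, and 31200 steps up to PRIMARY while the two voters report RECOVERING until their own initial sync completes. A hedged check of the outcome from the shell (keyFile auth omitted):

    var d2 = new Mongo("tp2.10gen.cc:31200");
    d2.getDB("admin").runCommand({ isMaster: 1 });          // ismaster: true, primary: "tp2.10gen.cc:31200"
    d2.getDB("admin").runCommand({ replSetGetStatus: 1 });  // myState: 1; the other members reach state 2 once synced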
adding shard d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m31000| Wed Jun 13 22:32:53 [conn] authenticate db: admin { authenticate: 1.0, user: "foo", nonce: "5526c93db0d6a9b1", key: "5b8082517072a748b9f0445e21d4ee8b" }
{ "dbname" : "admin", "user" : "foo", "readOnly" : false, "ok" : 1 }
logged in
m31000| Wed Jun 13 22:32:53 [conn] starting new replica set monitor for replica set d2 with seed of tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m31200| Wed Jun 13 22:32:53 [initandlisten] connection accepted from 184.173.149.242:41181 #5 (5 connections now open)
m31000| Wed Jun 13 22:32:53 [conn] successfully connected to seed tp2.10gen.cc:31200 for replica set d2
m31000| Wed Jun 13 22:32:53 [conn] changing hosts to { 0: "tp2.10gen.cc:31200", 1: "tp2.10gen.cc:31202", 2: "tp2.10gen.cc:31201" } from d2/
m31000| Wed Jun 13 22:32:53 [conn] trying to add new host tp2.10gen.cc:31200 to replica set d2
m31200| Wed Jun 13 22:32:53 [initandlisten] connection accepted from 184.173.149.242:41182 #6 (6 connections now open)
m31000| Wed Jun 13 22:32:53 [conn] successfully connected to new host tp2.10gen.cc:31200 in replica set d2
m31000| Wed Jun 13 22:32:53 [conn] trying to add new host tp2.10gen.cc:31201 to replica set d2
m31000| Wed Jun 13 22:32:53 [conn] successfully connected to new host tp2.10gen.cc:31201 in replica set d2
m31000| Wed Jun 13 22:32:53 [conn] trying to add new host tp2.10gen.cc:31202 to replica set d2
m31201| Wed Jun 13 22:32:53 [initandlisten] connection accepted from 184.173.149.242:59504 #5 (5 connections now open)
m31202| Wed Jun 13 22:32:53 [initandlisten] connection accepted from 184.173.149.242:42948 #5 (5 connections now open)
m31000| Wed Jun 13 22:32:53 [conn] successfully connected to new host tp2.10gen.cc:31202 in replica set d2
m31200| Wed Jun 13 22:32:53 [initandlisten] connection accepted from 184.173.149.242:41185 #7 (7 connections now open)
m31200| Wed Jun 13 22:32:53 [conn7] authenticate db: local { authenticate: 1, nonce: "99eda8799193c866", user: "__system", key: "c2c117887d860a237f9489b316c6d23e" }
m31200| Wed Jun 13 22:32:53 [conn5] end connection 184.173.149.242:41181 (6 connections now open)
m31000| Wed Jun 13 22:32:53 [conn] Primary for replica set d2 changed to tp2.10gen.cc:31200
m31201| Wed Jun 13 22:32:53 [initandlisten] connection accepted from 184.173.149.242:59507 #6 (6 connections now open)
m31201| Wed Jun 13 22:32:53 [conn6] authenticate db: local { authenticate: 1, nonce: "207d1bead5673829", user: "__system", key: "b484ffc916754af59daba7c44b31ca33" }
m31202| Wed Jun 13 22:32:53 [initandlisten] connection accepted from 184.173.149.242:42951 #6 (6 connections now open)
m31202| Wed Jun 13 22:32:53 [conn6] authenticate db: local { authenticate: 1, nonce: "ad222466434f0036", user: "__system", key: "96113993758a04d56328d6ef432a9f3d" }
m31000| Wed Jun 13 22:32:53 [conn] replica set monitor for replica set d2 started, address is d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m31200| Wed Jun 13 22:32:53 [initandlisten] connection accepted from 184.173.149.242:41188 #8 (7 connections now open)
m31200| Wed Jun 13 22:32:53 [conn8] authenticate db: local { authenticate: 1, nonce: "c2007f8ed366891e", user: "__system", key: "80cd79eb954c3024a5e7dc5c34dd2758" }
m31000| Wed Jun 13 22:32:53 [conn] going to add shard: { _id: "d2", host: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202" }
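From "adding shard d2/..." through "going to add shard" above is the addShard round trip: the shell logs in to the mongos as the "foo" admin user, the mongos starts a replica set monitor for d2, identifies 31200 as primary, and records the shard document on the config server. A hedged shell equivalent; the admin password is hypothetical:

    var admin = connect("tp2.10gen.cc:31000/admin");
    admin.auth("foo", "foopassword");                    // hypothetical password
    admin.runCommand({ addShard: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202" });
    // or: sh.addShard("d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202")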
m31000| range.universal(): 1
m31100| Wed Jun 13 22:32:53 [conn8] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m31100| Wed Jun 13 22:32:53 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m29000| Wed Jun 13 22:32:53 [initandlisten] connection accepted from 184.173.149.242:55550 #14 (14 connections now open)
m29000| Wed Jun 13 22:32:53 [conn14] authenticate db: local { authenticate: 1, nonce: "d49dbd97cc11ae66", user: "__system", key: "dc77e6fb90888ec7f65dccdf3c476572" }
m31100| Wed Jun 13 22:32:53 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 0.0 } ], shardId: "test.foo-x_MinKey", configdb: "tp2.10gen.cc:29000" }
m31100| Wed Jun 13 22:32:53 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Wed Jun 13 22:32:53 [LockPinger] creating distributed lock ping thread for tp2.10gen.cc:29000 and process tp2.10gen.cc:31100:1339644773:1331256593 (sleeping for 30000ms)
m31100| Wed Jun 13 22:32:53 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' acquired, ts : 4fd95b658529b0c3a8f3d66a
m31100| Wed Jun 13 22:32:53 [conn8] splitChunk accepted at version 1|0||4fd95b47e24b46bcab13cf46
m29000| Wed Jun 13 22:32:53 [conn14] info PageFaultRetryableSection will not yield, already locked upon reaching
m31100| Wed Jun 13 22:32:53 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:32:53-0", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644773216), what: "split", ns: "test.foo", details: { before: { min: { x: MinKey }, max: { x: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: MinKey }, max: { x: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31100| Wed Jun 13 22:32:53 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' unlocked.
m31000| Wed Jun 13 22:32:53 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||4fd95b47e24b46bcab13cf46 based on: 1|0||4fd95b47e24b46bcab13cf46
m31000| Wed Jun 13 22:32:53 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 lastmod: 1|0||000000000000000000000000 min: { x: MinKey } max: { x: MaxKey } on: { x: 0.0 } (splitThreshold 921) size: 1200
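The split above is the autosplit path: once inserts push the first chunk of test.foo past splitThreshold, the mongos asks the d1 primary for split points, the shard takes the distributed lock on test.foo, splits at { x: 0.0 }, and the mongos reloads its chunk manager at version 1|2. The lines that follow repeat this at { x: 5850.0 } and then trigger a migration to d2. A hedged manual equivalent of the same operations, run against the mongos as the admin user (these sh helpers wrap the split and moveChunk admin commands):

    sh.splitAt("test.foo", { x: 0 });
    sh.splitAt("test.foo", { x: 5850 });
    sh.moveChunk("test.foo", { x: 5850 }, "d2");         // the migration suggested in the lines below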
m31200| Wed Jun 13 22:32:54 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state RECOVERING
m31200| Wed Jun 13 22:32:54 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state RECOVERING
m31100| Wed Jun 13 22:32:54 [conn8] request split points lookup for chunk test.foo { : 0.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:32:54 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 0.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:32:54 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 0.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 5850.0 } ], shardId: "test.foo-x_0.0", configdb: "tp2.10gen.cc:29000" }
m31100| Wed Jun 13 22:32:54 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Wed Jun 13 22:32:54 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' acquired, ts : 4fd95b668529b0c3a8f3d66b
m31100| Wed Jun 13 22:32:54 [conn8] splitChunk accepted at version 1|2||4fd95b47e24b46bcab13cf46
m31100| Wed Jun 13 22:32:54 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:32:54-1", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644774510), what: "split", ns: "test.foo", details: { before: { min: { x: 0.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 0.0 }, max: { x: 5850.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 5850.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31100| Wed Jun 13 22:32:54 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' unlocked.
m31000| Wed Jun 13 22:32:54 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 4 version: 1|4||4fd95b47e24b46bcab13cf46 based on: 1|2||4fd95b47e24b46bcab13cf46
m31000| Wed Jun 13 22:32:54 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 lastmod: 1|2||000000000000000000000000 min: { x: 0.0 } max: { x: MaxKey } on: { x: 5850.0 } (splitThreshold 471859) size: 585100 (migrate suggested)
m31000| Wed Jun 13 22:32:54 [conn] moving chunk (auto): ns:test.foo at: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 lastmod: 1|4||000000000000000000000000 min: { x: 5850.0 } max: { x: MaxKey } to: d2:d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m31000| Wed Jun 13 22:32:54 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 lastmod: 1|4||000000000000000000000000 min: { x: 5850.0 } max: { x: MaxKey }) d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 -> d2:d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m31100| Wed Jun 13 22:32:54 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102", to: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202", fromShard: "d1", toShard: "d2", min: { x: 5850.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_5850.0", configdb: "tp2.10gen.cc:29000" }
m31100| Wed Jun 13 22:32:54 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Wed Jun 13 22:32:54 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' acquired, ts : 4fd95b668529b0c3a8f3d66c
m31100| Wed Jun 13 22:32:54 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:32:54-2", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644774559), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 5850.0 }, max: { x: MaxKey }, from: "d1", to: "d2" } }
m31100| Wed Jun 13 22:32:54 [conn8] moveChunk request accepted at version 1|4||4fd95b47e24b46bcab13cf46
m31100| Wed Jun 13 22:32:54 [conn8] moveChunk number of documents: 1
m31100| Wed Jun 13 22:32:54 [conn8] starting new replica set monitor for replica set d2 with seed of tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m31200| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:41190 #9 (8 connections now open)
m31100| Wed Jun 13 22:32:54 [conn8] successfully connected to seed tp2.10gen.cc:31200 for replica set d2
m31100| Wed Jun 13 22:32:54 [conn8] changing hosts to { 0: "tp2.10gen.cc:31200", 1: "tp2.10gen.cc:31202", 2: "tp2.10gen.cc:31201" } from d2/
m31100| Wed Jun 13 22:32:54 [conn8] trying to add new host tp2.10gen.cc:31200 to replica set d2
m31100| Wed Jun 13 22:32:54 [conn8] successfully connected to new host tp2.10gen.cc:31200 in replica set d2
m31100| Wed Jun 13 22:32:54 [conn8] trying to add new host tp2.10gen.cc:31201 to replica set d2
m31200| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:41191 #10 (9 connections now open)
m31201| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:59513 #7 (7 connections now open)
m31100| Wed Jun 13 22:32:54 [conn8] successfully connected to new host tp2.10gen.cc:31201 in replica set d2
m31100| Wed Jun 13 22:32:54 [conn8] trying to add new host tp2.10gen.cc:31202 to replica set d2
m31202| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:42957 #7 (7 connections now open)
m31100| Wed Jun 13 22:32:54 [conn8] successfully connected to new host tp2.10gen.cc:31202 in replica set d2
m31200| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:41194 #11 (10 connections now open)
m31200| Wed Jun 13 22:32:54 [conn11] authenticate db: local { authenticate: 1, nonce: "83da365d8d68c63f", user: "__system", key: "870c8062b81937b86fc579ab087010aa" }
m31200| Wed Jun 13 22:32:54 [conn9] end connection 184.173.149.242:41190 (9 connections now open)
m31100| Wed Jun 13 22:32:54 [conn8] Primary for replica set d2 changed to tp2.10gen.cc:31200
m31201| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:59516 #8 (8 connections now open)
m31201| Wed Jun 13 22:32:54 [conn8] authenticate db: local { authenticate: 1, nonce: "702c14c0c9d03745", user: "__system", key: "4bc2ed67380e3a3a9a33fb8fb414b1ce" }
m31202| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:42960 #8 (8 connections now open)
m31202| Wed Jun 13 22:32:54 [conn8] authenticate db: local { authenticate: 1, nonce: "87654287a6067a8a", user: "__system", key: "78c85a856e6e778bbedbe94a810b221f" }
m31100| Wed Jun 13 22:32:54 [conn8] replica set monitor for replica set d2 started, address is d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m31100| Wed Jun 13 22:32:54 [ReplicaSetMonitorWatcher] starting
m31200| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:41197 #12 (10 connections now open)
m31200| Wed Jun 13 22:32:54 [conn12] authenticate db: local { authenticate: 1, nonce: "acb600de26ab1cae", user: "__system", key: "a8833ed437fd8a2c218cf10eab524011" }
m31200| Wed Jun 13 22:32:54 [migrateThread] starting new replica set monitor for replica set d1 with seed of tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31100| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:42906 #32 (18 connections now open)
m31200| Wed Jun 13 22:32:54 [migrateThread] successfully connected to seed tp2.10gen.cc:31100 for replica set d1
m31200| Wed Jun 13 22:32:54 [migrateThread] changing hosts to { 0: "tp2.10gen.cc:31100", 1: "tp2.10gen.cc:31102", 2: "tp2.10gen.cc:31101" } from d1/
m31200| Wed Jun 13 22:32:54 [migrateThread] trying to add new host tp2.10gen.cc:31100 to replica set d1
m31200| Wed Jun 13 22:32:54 [migrateThread] successfully connected to new host tp2.10gen.cc:31100 in replica set d1
m31200| Wed Jun 13 22:32:54 [migrateThread] trying to add new host tp2.10gen.cc:31101 to replica set d1
m31100| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:42907 #33 (19 connections now open)
m31200| Wed Jun 13 22:32:54 [migrateThread] successfully connected to new host tp2.10gen.cc:31101 in replica set d1
m31101| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:56617 #13 (11 connections now open)
m31200| Wed Jun 13 22:32:54 [migrateThread] trying to add new host tp2.10gen.cc:31102 to replica set d1
m31102| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:52776 #12 (10 connections now open)
m31200| Wed Jun 13 22:32:54 [migrateThread] successfully connected to new host tp2.10gen.cc:31102 in replica set d1
m31100| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:42910 #34 (20 connections now open)
m31100| Wed Jun 13 22:32:54 [conn34] authenticate db: local { authenticate: 1, nonce: "711da1aa670ea052", user: "__system", key: "f8f30e71e54db64c1ca23941d0607cae" }
m31100| Wed Jun 13 22:32:54 [conn32] end connection 184.173.149.242:42906 (19 connections now open)
m31200| Wed Jun 13 22:32:54 [migrateThread] Primary for replica set d1 changed to tp2.10gen.cc:31100
m31101| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:56620 #14 (12 connections now open)
m31101| Wed Jun 13 22:32:54 [conn14] authenticate db: local { authenticate: 1, nonce: "894d2ead2e63b991", user: "__system", key: "06a39991776df1d3a28f6180a06898c5" }
m31102| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:52779 #13 (11 connections now open)
m31102| Wed Jun 13 22:32:54 [conn13] authenticate db: local { authenticate: 1, nonce: "8ec7bbd0358a52e2", user: "__system", key: "49c964fbb9047baf14ba06132426eea3" }
m31200| Wed Jun 13 22:32:54 [migrateThread] replica set monitor for replica set d1 started, address is d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31200| Wed Jun 13 22:32:54 [ReplicaSetMonitorWatcher] starting
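The two blocks above show each side of the migration creating a replica set monitor for the other shard: the seed list is expanded to the full host list, the primary is tracked, and every new intra-cluster connection authenticates to the local database as __system using the cluster's key file. A client can address a shard the same way through its set-name/seed-list string; a minimal mongo-shell sketch, assuming the shell accepts this connection-string form:

    // replica-set-aware connection to d1, hosts taken from the log above
    var conn = new Mongo("d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102");
    var testDB = conn.getDB("test");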
m31100| Wed Jun 13 22:32:54 [initandlisten] connection accepted from 184.173.149.242:42913 #35 (20 connections now open)
m31100| Wed Jun 13 22:32:54 [conn35] authenticate db: local { authenticate: 1, nonce: "7e941d31238954b9", user: "__system", key: "e1c2ad02172e7b74c4fa6bab556b0380" }
m31200| Wed Jun 13 22:32:54 [migrateThread] opening db: test
m31200| Wed Jun 13 22:32:54 [FileAllocator] allocating new datafile /data/db/d2-0/test.ns, filling with zeroes...
m31200| Wed Jun 13 22:32:54 [FileAllocator] done allocating datafile /data/db/d2-0/test.ns, size: 16MB, took 0.039 secs
m31200| Wed Jun 13 22:32:54 [FileAllocator] allocating new datafile /data/db/d2-0/test.0, filling with zeroes...
m31200| Wed Jun 13 22:32:54 [FileAllocator] done allocating datafile /data/db/d2-0/test.0, size: 16MB, took 0.035 secs
m31200| Wed Jun 13 22:32:54 [migrateThread] datafileheader::init initializing /data/db/d2-0/test.0 n:0
m31200| Wed Jun 13 22:32:54 [migrateThread] build index test.foo { _id: 1 }
m31200| Wed Jun 13 22:32:54 [migrateThread] build index done. scanned 0 total records. 0.015 secs
m31200| Wed Jun 13 22:32:54 [migrateThread] info: creating collection test.foo on add index
m31200| Wed Jun 13 22:32:54 [migrateThread] build index test.foo { x: 1.0 }
m31200| Wed Jun 13 22:32:54 [migrateThread] build index done. scanned 0 total records. 0 secs
m31200| Wed Jun 13 22:32:54 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 5850.0 } -> { x: MaxKey }
m31100| Wed Jun 13 22:32:55 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102", min: { x: 5850.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Wed Jun 13 22:32:55 [conn8] moveChunk setting version to: 2|0||4fd95b47e24b46bcab13cf46
m31200| Wed Jun 13 22:32:55 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 5850.0 } -> { x: MaxKey }
m31200| Wed Jun 13 22:32:55 [migrateThread] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:32:55-0", server: "tp2.10gen.cc", clientAddr: ":27017", time: new Date(1339644775578), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 5850.0 }, max: { x: MaxKey }, step1 of 5: 107, step2 of 5: 0, step3 of 5: 1, step4 of 5: 0, step5 of 5: 901 } }
m29000| Wed Jun 13 22:32:55 [initandlisten] connection accepted from 184.173.149.242:55567 #15 (15 connections now open)
m31100| Wed Jun 13 22:32:55 [conn8] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102", min: { x: 5850.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Wed Jun 13 22:32:55 [conn8] moveChunk updating self version to: 2|1||4fd95b47e24b46bcab13cf46 through { x: MinKey } -> { x: 0.0 } for collection 'test.foo'
m29000| Wed Jun 13 22:32:55 [conn15] authenticate db: local { authenticate: 1, nonce: "59a2ccdcef6ad6fb", user: "__system", key: "ad2383aff7a7f59db5b0eebb6d689cfb" }
m31100| Wed Jun 13 22:32:55 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:32:55-3", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644775581), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 5850.0 }, max: { x: MaxKey }, from: "d1", to: "d2" } }
m31100| Wed Jun 13 22:32:55 [conn8] doing delete inline
m31100| Wed Jun 13 22:32:55 [conn8] moveChunk deleted: 1
m31100| Wed Jun 13 22:32:57 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' unlocked.
m31100| Wed Jun 13 22:32:57 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:32:57-4", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644777606), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 5850.0 }, max: { x: MaxKey }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 8, step4 of 6: 1000, step5 of 6: 13, step6 of 6: 2001 } }
m31100| Wed Jun 13 22:32:57 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102", to: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202", fromShard: "d1", toShard: "d2", min: { x: 5850.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_5850.0", configdb: "tp2.10gen.cc:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 W:133 r:55631 w:84437 reslen:37 3050ms
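The 3050ms command above is the shard-side moveChunk that mongos issued automatically after the autosplit (maxChunkSizeBytes: 1048576 reflects the test's 1 MB chunk size). The same migration can also be requested by hand against a mongos; a minimal mongo-shell sketch, with the shard key value taken from the log:

    // ask mongos to move the chunk that contains { x: 5850 } to shard d2
    db.adminCommand({ moveChunk: "test.foo", find: { x: 5850 }, to: "d2" })
    // sh.moveChunk("test.foo", { x: 5850 }, "d2") is the shell helper that wraps the same command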
m31000| Wed Jun 13 22:32:57 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 2|1||4fd95b47e24b46bcab13cf46 based on: 1|4||4fd95b47e24b46bcab13cf46
m31200| Wed Jun 13 22:32:57 [initandlisten] connection accepted from 184.173.149.242:41207 #13 (11 connections now open)
m31200| Wed Jun 13 22:32:57 [conn13] authenticate db: local { authenticate: 1, nonce: "f62f2c52db818ac5", user: "__system", key: "6a288c7377be4ff2dcf4ea7fe56b44c3" }
m31000| Wed Jun 13 22:32:57 [conn] creating WriteBackListener for: tp2.10gen.cc:31200 serverID: 4fd95b1ee24b46bcab13cf40
m31000| Wed Jun 13 22:32:57 [conn] creating WriteBackListener for: tp2.10gen.cc:31201 serverID: 4fd95b1ee24b46bcab13cf40
m31000| Wed Jun 13 22:32:57 [conn] creating WriteBackListener for: tp2.10gen.cc:31202 serverID: 4fd95b1ee24b46bcab13cf40
m31200| Wed Jun 13 22:32:57 [conn13] no current chunk manager found for this shard, will initialize
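After the migration commits, mongos reloads the chunk map (sequenceNumber 5, version 2|1) and the receiving shard builds its chunk manager on first use. If a mongos is suspected of holding stale routing metadata, it can be told to reload explicitly; a minimal sketch, run through the mongos:

    // force this mongos to drop and reload its cached routing metadata
    db.adminCommand({ flushRouterConfig: 1 })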
m31201| Wed Jun 13 22:32:58 [conn3] end connection 184.173.149.242:59493 (7 connections now open)
m31201| Wed Jun 13 22:32:58 [initandlisten] connection accepted from 184.173.149.242:59529 #9 (8 connections now open)
m31201| Wed Jun 13 22:32:58 [conn9] authenticate db: local { authenticate: 1, nonce: "f230bc5e82801046", user: "__system", key: "aba950447dc0b8dbd5f272bdc822713d" }
m31101| Wed Jun 13 22:32:58 [conn7] end connection 184.173.149.242:56550 (11 connections now open)
m31101| Wed Jun 13 22:32:58 [initandlisten] connection accepted from 184.173.149.242:56626 #15 (12 connections now open)
m31101| Wed Jun 13 22:32:58 [conn15] authenticate db: local { authenticate: 1, nonce: "e1a92d20f91019ec", user: "__system", key: "db9b2960def677aa2a6f8630962e4bb3" }
m31200| Wed Jun 13 22:32:59 [conn8] request split points lookup for chunk test.foo { : 5850.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:32:59 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 5850.0 } -->> { : MaxKey }
m29000| Wed Jun 13 22:32:59 [initandlisten] connection accepted from 184.173.149.242:55571 #16 (16 connections now open)
m29000| Wed Jun 13 22:32:59 [conn16] authenticate db: local { authenticate: 1, nonce: "4dc4e20d9b3674c1", user: "__system", key: "3d24a5794541b76a7ffb40e32cc8481b" }
m31200| Wed Jun 13 22:32:59 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 5850.0 }, max: { x: MaxKey }, from: "d2", splitKeys: [ { x: 16732.0 } ], shardId: "test.foo-x_5850.0", configdb: "tp2.10gen.cc:29000" }
m31200| Wed Jun 13 22:32:59 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| Wed Jun 13 22:32:59 [LockPinger] creating distributed lock ping thread for tp2.10gen.cc:29000 and process tp2.10gen.cc:31200:1339644779:563366256 (sleeping for 30000ms)
m31200| Wed Jun 13 22:32:59 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31200:1339644779:563366256' acquired, ts : 4fd95b6bb84ac0176ed7bccb
m31200| Wed Jun 13 22:32:59 [conn8] splitChunk accepted at version 2|0||4fd95b47e24b46bcab13cf46
m31200| Wed Jun 13 22:32:59 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:32:59-1", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:41188", time: new Date(1339644779282), what: "split", ns: "test.foo", details: { before: { min: { x: 5850.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 5850.0 }, max: { x: 16732.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 16732.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31200| Wed Jun 13 22:32:59 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31200:1339644779:563366256' unlocked.
m31000| Wed Jun 13 22:32:59 [conn] ChunkManager: time to load chunks for test.foo: 24ms sequenceNumber: 6 version: 2|3||4fd95b47e24b46bcab13cf46 based on: 2|1||4fd95b47e24b46bcab13cf46
m31000| Wed Jun 13 22:32:59 [conn] autosplitted test.foo shard: ns:test.foo at: d2:d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202 lastmod: 2|0||000000000000000000000000 min: { x: 5850.0 } max: { x: MaxKey } on: { x: 16732.0 } (splitThreshold 943718) size: 1048600 (migrate suggested)
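Once a chunk passes the split threshold, mongos asks the owning shard for split points and then sends splitChunk, which the shard runs under the collection's distributed lock before the split is recorded in the config metadata. A split at an explicit key can also be forced from a mongos; a minimal mongo-shell sketch, using the split point reported above:

    // split the test.foo chunk at x = 16732
    sh.splitAt("test.foo", { x: 16732 })
    // raw command form of the same request:
    db.adminCommand({ split: "test.foo", middle: { x: 16732 } })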
m31200| Wed Jun 13 22:32:59 [conn8] request split points lookup for chunk test.foo { : 16732.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:32:59 [conn8] request split points lookup for chunk test.foo { : 16732.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:33:00 [conn8] request split points lookup for chunk test.foo { : 16732.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:33:00 [conn8] request split points lookup for chunk test.foo { : 16732.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:33:00 [conn3] end connection 184.173.149.242:41175 (10 connections now open)
m31200| Wed Jun 13 22:33:00 [initandlisten] connection accepted from 184.173.149.242:41211 #14 (11 connections now open)
m31200| Wed Jun 13 22:33:00 [conn14] authenticate db: local { authenticate: 1, nonce: "3bb8a03bc757dfe6", user: "__system", key: "2ecab0fd40411900ca7b762ac894bb87" }
m31100| Wed Jun 13 22:33:00 [conn10] end connection 184.173.149.242:42842 (19 connections now open)
m31100| Wed Jun 13 22:33:00 [initandlisten] connection accepted from 184.173.149.242:42920 #36 (20 connections now open)
m31100| Wed Jun 13 22:33:00 [conn36] authenticate db: local { authenticate: 1, nonce: "c99926d17313f708", user: "__system", key: "92e6511731d3d4759ea4a1e658469512" }
m31200| Wed Jun 13 22:33:00 [conn4] end connection 184.173.149.242:41177 (10 connections now open)
m31200| Wed Jun 13 22:33:00 [initandlisten] connection accepted from 184.173.149.242:41213 #15 (11 connections now open)
m31200| Wed Jun 13 22:33:00 [conn15] authenticate db: local { authenticate: 1, nonce: "fdf213ebd69b5790", user: "__system", key: "b61e69908d6305ee149d9e6fc0fecb43" }
m31100| Wed Jun 13 22:33:00 [conn11] end connection 184.173.149.242:42843 (19 connections now open)
m31100| Wed Jun 13 22:33:00 [initandlisten] connection accepted from 184.173.149.242:42922 #37 (20 connections now open)
m31100| Wed Jun 13 22:33:00 [conn37] authenticate db: local { authenticate: 1, nonce: "bec7e7627b525c90", user: "__system", key: "6c76825241574851c3e7d81ad0cd8009" }
m31201| Wed Jun 13 22:33:00 [rsSync] replSet initial sync pending
m31201| Wed Jun 13 22:33:00 [rsSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:33:00 [initandlisten] connection accepted from 184.173.149.242:41215 #16 (12 connections now open)
m31200| Wed Jun 13 22:33:00 [conn16] authenticate db: local { authenticate: 1, nonce: "776f6e00f14f9fb2", user: "__system", key: "6e0137c88439d672b22a0287478aba6f" }
m31201| Wed Jun 13 22:33:00 [rsSync] build index local.me { _id: 1 }
m31201| Wed Jun 13 22:33:00 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Wed Jun 13 22:33:00 [rsSync] replSet initial sync drop all databases
m31201| Wed Jun 13 22:33:00 [rsSync] dropAllDatabasesExceptLocal 1
m31201| Wed Jun 13 22:33:00 [rsSync] replSet initial sync clone all databases
m31201| Wed Jun 13 22:33:00 [rsSync] replSet initial sync cloning db: test
m31201| Wed Jun 13 22:33:00 [rsSync] opening db: test
m31200| Wed Jun 13 22:33:00 [initandlisten] connection accepted from 184.173.149.242:41216 #17 (13 connections now open)
m31200| Wed Jun 13 22:33:00 [conn17] authenticate db: local { authenticate: 1, nonce: "35d0986ec88c8f2f", user: "__system", key: "160376f52b1a691ec686568d567b54d7" }
m31201| Wed Jun 13 22:33:00 [FileAllocator] allocating new datafile /data/db/d2-1/test.ns, filling with zeroes...
m31201| Wed Jun 13 22:33:00 [FileAllocator] done allocating datafile /data/db/d2-1/test.ns, size: 16MB, took 0.037 secs
m31201| Wed Jun 13 22:33:00 [FileAllocator] allocating new datafile /data/db/d2-1/test.0, filling with zeroes...
m31201| Wed Jun 13 22:33:00 [FileAllocator] done allocating datafile /data/db/d2-1/test.0, size: 16MB, took 0.03 secs
m31201| Wed Jun 13 22:33:00 [rsSync] datafileheader::init initializing /data/db/d2-1/test.0 n:0
m31200| Wed Jun 13 22:33:00 [conn17] exhaust=true sending more
m31200| Wed Jun 13 22:33:00 [conn17] getmore test.foo cursorid:3020550997384438046 ntoreturn:0 keyUpdates:0 numYields: 171 locks(micros) r:108379 nreturned:20012 reslen:1941184 149ms
m31200| Wed Jun 13 22:33:00 [conn8] request split points lookup for chunk test.foo { : 16732.0 } -->> { : MaxKey }
m31202| Wed Jun 13 22:33:01 [rsSync] replSet initial sync pending
m31202| Wed Jun 13 22:33:01 [rsSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:33:01 [initandlisten] connection accepted from 184.173.149.242:41217 #18 (14 connections now open)
m31200| Wed Jun 13 22:33:01 [conn18] authenticate db: local { authenticate: 1, nonce: "b62fb78b808cdad2", user: "__system", key: "3251af05a0d32985d95557868cd997de" }
m31202| Wed Jun 13 22:33:01 [rsSync] build index local.me { _id: 1 }
m31202| Wed Jun 13 22:33:01 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Wed Jun 13 22:33:01 [rsSync] replSet initial sync drop all databases
m31202| Wed Jun 13 22:33:01 [rsSync] dropAllDatabasesExceptLocal 1
m31202| Wed Jun 13 22:33:01 [rsSync] replSet initial sync clone all databases
m31202| Wed Jun 13 22:33:01 [rsSync] replSet initial sync cloning db: test
m31202| Wed Jun 13 22:33:01 [rsSync] opening db: test
m31200| Wed Jun 13 22:33:01 [initandlisten] connection accepted from 184.173.149.242:41218 #19 (15 connections now open)
m31200| Wed Jun 13 22:33:01 [conn19] authenticate db: local { authenticate: 1, nonce: "125381cda2f2b657", user: "__system", key: "857b6d9bf5a9581a99cba2135cf72a47" }
m31202| Wed Jun 13 22:33:01 [FileAllocator] allocating new datafile /data/db/d2-2/test.ns, filling with zeroes...
m31202| Wed Jun 13 22:33:01 [FileAllocator] done allocating datafile /data/db/d2-2/test.ns, size: 16MB, took 0.04 secs
m31202| Wed Jun 13 22:33:01 [FileAllocator] allocating new datafile /data/db/d2-2/test.0, filling with zeroes...
m31201| Wed Jun 13 22:33:01 [rsSync] build index test.foo { _id: 1 }
m31202| Wed Jun 13 22:33:01 [FileAllocator] done allocating datafile /data/db/d2-2/test.0, size: 16MB, took 0.036 secs
m31202| Wed Jun 13 22:33:01 [rsSync] datafileheader::init initializing /data/db/d2-2/test.0 n:0
m31200| Wed Jun 13 22:33:01 [conn19] exhaust=true sending more
m31200| Wed Jun 13 22:33:01 [conn19] getmore test.foo cursorid:256953066707559137 ntoreturn:0 keyUpdates:0 numYields: 188 locks(micros) r:66374 nreturned:21399 reslen:2075723 103ms
m31201| Wed Jun 13 22:33:01 [rsSync] fastBuildIndex dupsToDrop:0
m31201| Wed Jun 13 22:33:01 [rsSync] build index done. scanned 20113 total records. 0.175 secs
m31201| Wed Jun 13 22:33:01 [rsSync] replSet initial sync cloning db: admin
m31200| Wed Jun 13 22:33:01 [conn17] end connection 184.173.149.242:41216 (14 connections now open)
m31200| Wed Jun 13 22:33:01 [initandlisten] connection accepted from 184.173.149.242:41219 #20 (15 connections now open)
m31200| Wed Jun 13 22:33:01 [conn8] request split points lookup for chunk test.foo { : 16732.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:33:01 [conn20] authenticate db: local { authenticate: 1, nonce: "fdf80b391eb7718e", user: "__system", key: "b3c8294d3d422e92ce6f81ceb1739db9" }
m31201| Wed Jun 13 22:33:01 [rsSync] replSet initial sync data copy, starting syncup
m31200| Wed Jun 13 22:33:01 [conn20] end connection 184.173.149.242:41219 (14 connections now open)
m31200| Wed Jun 13 22:33:01 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 16732.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:33:01 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 16732.0 }, max: { x: MaxKey }, from: "d2", splitKeys: [ { x: 27969.0 } ], shardId: "test.foo-x_16732.0", configdb: "tp2.10gen.cc:29000" }
m31200| Wed Jun 13 22:33:01 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| Wed Jun 13 22:33:01 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31200:1339644779:563366256' acquired, ts : 4fd95b6db84ac0176ed7bccc
m31200| Wed Jun 13 22:33:01 [conn8] splitChunk accepted at version 2|3||4fd95b47e24b46bcab13cf46
m31200| Wed Jun 13 22:33:01 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:01-2", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:41188", time: new Date(1339644781550), what: "split", ns: "test.foo", details: { before: { min: { x: 16732.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 16732.0 }, max: { x: 27969.0 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 27969.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31200| Wed Jun 13 22:33:01 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31200:1339644779:563366256' unlocked.
m31000| Wed Jun 13 22:33:01 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 7 version: 2|5||4fd95b47e24b46bcab13cf46 based on: 2|3||4fd95b47e24b46bcab13cf46
m31201| Wed Jun 13 22:33:01 [rsBackgroundSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:33:01 [initandlisten] connection accepted from 184.173.149.242:41220 #21 (15 connections now open)
m31200| Wed Jun 13 22:33:01 [conn21] authenticate db: local { authenticate: 1, nonce: "829089d1c1bddeb2", user: "__system", key: "a0b448ece70c9400898e29ba76fccb58" }
m31201| Wed Jun 13 22:33:01 [rsBackgroundSync] replSet remoteOldestOp: Jun 13 22:32:35 4fd95b53:1
m31201| Wed Jun 13 22:33:01 [rsBackgroundSync] replSet lastOpTimeFetched: Jun 13 22:33:00 4fd95b6c:1290
m31202| Wed Jun 13 22:33:01 [rsSync] build index test.foo { _id: 1 }
m31000| Wed Jun 13 22:33:01 [conn] autosplitted test.foo shard: ns:test.foo at: d2:d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202 lastmod: 2|3||000000000000000000000000 min: { x: 16732.0 } max: { x: MaxKey } on: { x: 27969.0 } (splitThreshold 943718) size: 1048600 (migrate suggested)
m31200| Wed Jun 13 22:33:01 [conn8] request split points lookup for chunk test.foo { : 27969.0 } -->> { : MaxKey }
m31202| Wed Jun 13 22:33:01 [rsSync] fastBuildIndex dupsToDrop:0
m31202| Wed Jun 13 22:33:01 [rsSync] build index done. scanned 21500 total records. 0.192 secs
m31200| Wed Jun 13 22:33:01 [conn19] end connection 184.173.149.242:41218 (14 connections now open)
m31202| Wed Jun 13 22:33:01 [rsSync] replSet initial sync cloning db: admin
m31200| Wed Jun 13 22:33:01 [initandlisten] connection accepted from 184.173.149.242:41221 #22 (15 connections now open)
m31200| Wed Jun 13 22:33:01 [conn22] authenticate db: local { authenticate: 1, nonce: "d5e70238af0d8ffc", user: "__system", key: "4ec286bf08b0be7b8a3d2bc08baf2fbc" }
m31202| Wed Jun 13 22:33:01 [rsSync] replSet initial sync data copy, starting syncup
m31200| Wed Jun 13 22:33:01 [conn22] end connection 184.173.149.242:41221 (14 connections now open)
m31202| Wed Jun 13 22:33:01 [rsBackgroundSync] replSet syncing to: tp2.10gen.cc:31200
m31200| Wed Jun 13 22:33:01 [initandlisten] connection accepted from 184.173.149.242:41222 #23 (15 connections now open)
m31200| Wed Jun 13 22:33:01 [conn23] authenticate db: local { authenticate: 1, nonce: "fcc464e077fa0f69", user: "__system", key: "255f2e972f59acfae2276167186c08e4" }
m31202| Wed Jun 13 22:33:01 [rsBackgroundSync] replSet remoteOldestOp: Jun 13 22:32:35 4fd95b53:1
m31202| Wed Jun 13 22:33:01 [rsBackgroundSync] replSet lastOpTimeFetched: Jun 13 22:33:01 4fd95b6d:28c
m31201| Wed Jun 13 22:33:01 [rsSync] replSet initial sync building indexes
m31201| Wed Jun 13 22:33:01 [rsSyncNotifier] replset setting oplog notifier to tp2.10gen.cc:31200
m31201| Wed Jun 13 22:33:01 [rsSync] replSet initial sync cloning indexes for : test
m31200| Wed Jun 13 22:33:01 [initandlisten] connection accepted from 184.173.149.242:41223 #24 (16 connections now open)
m31200| Wed Jun 13 22:33:01 [initandlisten] connection accepted from 184.173.149.242:41224 #25 (17 connections now open)
m31200| Wed Jun 13 22:33:01 [conn24] authenticate db: local { authenticate: 1, nonce: "a6b38baa067c969a", user: "__system", key: "85f1ff4fcbed31ae9e468a517450c710" }
m31200| Wed Jun 13 22:33:01 [conn25] authenticate db: local { authenticate: 1, nonce: "d4efb9568e9ad8ef", user: "__system", key: "ec0b223982419118fb7c3e5a7a23b777" }
m31201| Wed Jun 13 22:33:01 [rsSync] build index test.foo { x: 1.0 }
m30999| Wed Jun 13 22:33:02 [Balancer] starting new replica set monitor for replica set d2 with seed of tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m31200| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:41225 #26 (18 connections now open)
m30999| Wed Jun 13 22:33:02 [Balancer] successfully connected to seed tp2.10gen.cc:31200 for replica set d2
m30999| Wed Jun 13 22:33:02 [Balancer] changing hosts to { 0: "tp2.10gen.cc:31200", 1: "tp2.10gen.cc:31202", 2: "tp2.10gen.cc:31201" } from d2/
m30999| Wed Jun 13 22:33:02 [Balancer] trying to add new host tp2.10gen.cc:31200 to replica set d2
m31200| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:41226 #27 (19 connections now open)
m30999| Wed Jun 13 22:33:02 [Balancer] successfully connected to new host tp2.10gen.cc:31200 in replica set d2
m30999| Wed Jun 13 22:33:02 [Balancer] trying to add new host tp2.10gen.cc:31201 to replica set d2
m31201| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:59548 #10 (9 connections now open)
m30999| Wed Jun 13 22:33:02 [Balancer] successfully connected to new host tp2.10gen.cc:31201 in replica set d2
m30999| Wed Jun 13 22:33:02 [Balancer] trying to add new host tp2.10gen.cc:31202 to replica set d2
m31202| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:42992 #9 (9 connections now open)
m30999| Wed Jun 13 22:33:02 [Balancer] successfully connected to new host tp2.10gen.cc:31202 in replica set d2
m31200| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:41229 #28 (20 connections now open)
m31200| Wed Jun 13 22:33:02 [conn28] authenticate db: local { authenticate: 1, nonce: "50bc79b5c7de2ba4", user: "__system", key: "c9a93d0c06bdb9abea1a15b0a10f561e" }
m31200| Wed Jun 13 22:33:02 [conn26] end connection 184.173.149.242:41225 (19 connections now open)
m30999| Wed Jun 13 22:33:02 [Balancer] Primary for replica set d2 changed to tp2.10gen.cc:31200
m31201| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:59551 #11 (10 connections now open)
m31201| Wed Jun 13 22:33:02 [conn11] authenticate db: local { authenticate: 1, nonce: "c6ad484c46f61ef1", user: "__system", key: "a498383780226b5b426e157d0aa5f541" }
m31202| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:42995 #10 (10 connections now open)
m31202| Wed Jun 13 22:33:02 [conn10] authenticate db: local { authenticate: 1, nonce: "6ba150992d4fcbdf", user: "__system", key: "338320d21be336c753e919e8a919289d" }
m30999| Wed Jun 13 22:33:02 [Balancer] replica set monitor for replica set d2 started, address is d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202
m31200| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:41232 #29 (20 connections now open)
m31200| Wed Jun 13 22:33:02 [conn29] authenticate db: local { authenticate: 1, nonce: "ca875eb10a7d46f7", user: "__system", key: "ef37b0a6e7d5be49dd1d532aafd6d3a3" }
m30999| Wed Jun 13 22:33:02 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b6e93454f4c315250fe
m31200| Wed Jun 13 22:33:02 [conn8] request split points lookup for chunk test.foo { : 27969.0 } -->> { : MaxKey }
m30999| Wed Jun 13 22:33:02 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:33:02 [Balancer] d1 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:33:02 [Balancer] d2 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:33:02 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:33:02 [Balancer] d1
m30999| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:02 [Balancer] d2
m30999| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 16732.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_16732.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 16732.0 }, max: { x: 27969.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_27969.0", lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 27969.0 }, max: { x: MaxKey }, shard: "d2" }
m30999| Wed Jun 13 22:33:02 [Balancer] ----
m30999| Wed Jun 13 22:33:02 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m31201| Wed Jun 13 22:33:02 [rsSync] build index done. scanned 22120 total records. 0.18 secs
m31201| Wed Jun 13 22:33:02 [rsSync] replSet initial sync cloning indexes for : admin
m31200| Wed Jun 13 22:33:02 [conn24] end connection 184.173.149.242:41223 (19 connections now open)
m31200| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:41233 #30 (20 connections now open)
m31200| Wed Jun 13 22:33:02 [conn30] authenticate db: local { authenticate: 1, nonce: "e8af01a2a6ef59b1", user: "__system", key: "4bf3cfec9041daad1867fdf428375f36" }
m31201| Wed Jun 13 22:33:02 [rsSync] replSet initial sync query minValid
m31200| Wed Jun 13 22:33:02 [conn30] end connection 184.173.149.242:41233 (19 connections now open)
m31202| Wed Jun 13 22:33:02 [rsSyncNotifier] replset setting oplog notifier to tp2.10gen.cc:31200
m31200| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:41234 #31 (20 connections now open)
m31200| Wed Jun 13 22:33:02 [conn31] authenticate db: local { authenticate: 1, nonce: "e069c155a2e122a9", user: "__system", key: "dd8d02187966fc5fc1089311f3e67e4d" }
m31202| Wed Jun 13 22:33:02 [rsSync] replSet initial sync building indexes
m31202| Wed Jun 13 22:33:02 [rsSync] replSet initial sync cloning indexes for : test
m31200| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:41235 #32 (21 connections now open)
m31200| Wed Jun 13 22:33:02 [conn32] authenticate db: local { authenticate: 1, nonce: "ca9317d91837083b", user: "__system", key: "afb9b94971171aa04d3d9adb8462a588" }
m31202| Wed Jun 13 22:33:02 [rsSync] build index test.foo { x: 1.0 }
m31000| Wed Jun 13 22:33:02 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b6ee24b46bcab13cf4a
m31000| Wed Jun 13 22:33:02 [Balancer] ---- ShardInfoMap
m31000| Wed Jun 13 22:33:02 [Balancer] d1 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m31000| Wed Jun 13 22:33:02 [Balancer] d2 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m31000| Wed Jun 13 22:33:02 [Balancer] ---- ShardToChunksMap
m31000| Wed Jun 13 22:33:02 [Balancer] d1
m31000| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:02 [Balancer] d2
m31000| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 16732.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_16732.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 16732.0 }, max: { x: 27969.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:02 [Balancer] { _id: "test.foo-x_27969.0", lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 27969.0 }, max: { x: MaxKey }, shard: "d2" }
m31000| Wed Jun 13 22:33:02 [Balancer] ----
m31000| Wed Jun 13 22:33:02 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
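Both mongos balancers take the balancer lock in turn, dump the ShardInfoMap and ShardToChunksMap they balance from (two chunks on d1, three on d2 at this point), and release the lock without starting a migration. The same chunk distribution can be read back from the config metadata; a minimal mongo-shell sketch against a mongos:

    // summary of shards, chunks and balancer state
    sh.status()
    // or list the chunk ranges for the collection directly
    db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(printjson)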
m31201| Wed Jun 13 22:33:02 [rsSync] replSet initial sync finishing up
m31202| Wed Jun 13 22:33:02 [rsSync] build index done. scanned 23375 total records. 0.254 secs
m31201| Wed Jun 13 22:33:02 [rsSync] replSet set minValid=4fd95b6e:25c
m31200| Wed Jun 13 22:33:02 [conn32] end connection 184.173.149.242:41235 (20 connections now open)
m31202| Wed Jun 13 22:33:02 [rsSync] replSet initial sync cloning indexes for : admin
m31201| Wed Jun 13 22:33:02 [rsSync] build index local.replset.minvalid { _id: 1 }
m31201| Wed Jun 13 22:33:02 [rsSync] build index done. scanned 0 total records. 0 secs
m31200| Wed Jun 13 22:33:02 [initandlisten] connection accepted from 184.173.149.242:41236 #33 (21 connections now open)
m31200| Wed Jun 13 22:33:02 [conn33] authenticate db: local { authenticate: 1, nonce: "8bc2eac8af6b125e", user: "__system", key: "2a29a0c80c29735525ed4afe87640e93" }
m31202| Wed Jun 13 22:33:02 [rsSync] replSet initial sync query minValid
m31200| Wed Jun 13 22:33:02 [conn33] end connection 184.173.149.242:41236 (20 connections now open)
m31201| Wed Jun 13 22:33:02 [rsSync] replSet initial sync done
m31200| Wed Jun 13 22:33:02 [conn16] end connection 184.173.149.242:41215 (19 connections now open)
m31200| Wed Jun 13 22:33:02 [FileAllocator] allocating new datafile /data/db/d2-0/test.1, filling with zeroes...
m31200| Wed Jun 13 22:33:02 [FileAllocator] done allocating datafile /data/db/d2-0/test.1, size: 32MB, took 0.058 secs
m31200| Wed Jun 13 22:33:02 [conn13] datafileheader::init initializing /data/db/d2-0/test.1 n:1
m31200| Wed Jun 13 22:33:02 [conn8] request split points lookup for chunk test.foo { : 27969.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:33:02 [conn25] getmore local.oplog.rs query: { ts: { $gte: new Date(0) } } cursorid:1040914374959375703 ntoreturn:0 keyUpdates:0 numYields: 192 locks(micros) r:268495 nreturned:27150 reslen:461570 381ms
m31202| Wed Jun 13 22:33:02 [rsSync] replSet initial sync finishing up
m31202| Wed Jun 13 22:33:02 [rsSync] replSet set minValid=4fd95b6e:82d
m31202| Wed Jun 13 22:33:02 [rsSync] build index local.replset.minvalid { _id: 1 }
m31202| Wed Jun 13 22:33:02 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Wed Jun 13 22:33:02 [rsSync] replSet initial sync done
m31200| Wed Jun 13 22:33:02 [conn18] end connection 184.173.149.242:41217 (18 connections now open)
m31200| Wed Jun 13 22:33:02 [conn8] request split points lookup for chunk test.foo { : 27969.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:33:03 [initandlisten] connection accepted from 184.173.149.242:41237 #34 (19 connections now open)
m31200| Wed Jun 13 22:33:03 [conn34] authenticate db: local { authenticate: 1, nonce: "1aa53ce0e36cb341", user: "__system", key: "b56612635e3eb48fbbdf623ec1a7ed79" }
m31201| Wed Jun 13 22:33:03 [initandlisten] connection accepted from 184.173.149.242:59559 #12 (11 connections now open)
m31201| Wed Jun 13 22:33:03 [conn12] authenticate db: local { authenticate: 1, nonce: "d7dc4c15ff49aa0c", user: "__system", key: "1a1f27dab57480691e1edb50e26266ab" }
m31202| Wed Jun 13 22:33:03 [initandlisten] connection accepted from 184.173.149.242:43003 #11 (11 connections now open)
m31202| Wed Jun 13 22:33:03 [conn11] authenticate db: local { authenticate: 1, nonce: "d1f7341550ec6aae", user: "__system", key: "6a573a773c2667d3bacc9822d4ad50f3" }
m31200| Wed Jun 13 22:33:03 [conn8] request split points lookup for chunk test.foo { : 27969.0 } -->> { : MaxKey }
m31201| Wed Jun 13 22:33:03 [rsSync] replSet SECONDARY
m31200| Wed Jun 13 22:33:03 [conn8] request split points lookup for chunk test.foo { : 27969.0 } -->> { : MaxKey }
m31202| Wed Jun 13 22:33:03 [rsSync] replSet SECONDARY
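With initial sync finished (databases cloned, indexes built, minValid set and the oplog caught up), both new d2 members report SECONDARY. The same can be verified from the shell on any member of the set; a minimal sketch using the standard replication helpers:

    // state of every member as seen from the connected node
    rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr); })
    // replication lag of each secondary relative to the primary's last oplog entry
    db.printSlaveReplicationInfo()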
m31200| Wed Jun 13 22:33:03 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 27969.0 } -->> { : MaxKey }
m31200| Wed Jun 13 22:33:03 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 27969.0 }, max: { x: MaxKey }, from: "d2", splitKeys: [ { x: 38475.0 } ], shardId: "test.foo-x_27969.0", configdb: "tp2.10gen.cc:29000" }
m31200| Wed Jun 13 22:33:03 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| Wed Jun 13 22:33:03 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31200:1339644779:563366256' acquired, ts : 4fd95b6fb84ac0176ed7bccd
m31200| Wed Jun 13 22:33:03 [conn8] splitChunk accepted at version 2|5||4fd95b47e24b46bcab13cf46
m31200| Wed Jun 13 22:33:03 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:03-3", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:41188", time: new Date(1339644783855), what: "split", ns: "test.foo", details: { before: { min: { x: 27969.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 27969.0 }, max: { x: 38475.0 }, lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 38475.0 }, max: { x: MaxKey }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31200| Wed Jun 13 22:33:03 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31200:1339644779:563366256' unlocked.
m31000| Wed Jun 13 22:33:03 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 2|7||4fd95b47e24b46bcab13cf46 based on: 2|5||4fd95b47e24b46bcab13cf46
m31000| Wed Jun 13 22:33:03 [conn] autosplitted test.foo shard: ns:test.foo at: d2:d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202 lastmod: 2|5||000000000000000000000000 min: { x: 27969.0 } max: { x: MaxKey } on: { x: 38475.0 } (splitThreshold 943718) size: 1048600 (migrate suggested)
m31000| Wed Jun 13 22:33:03 [conn] moving chunk (auto): ns:test.foo at: d2:d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202 lastmod: 2|7||000000000000000000000000 min: { x: 38475.0 } max: { x: MaxKey } to: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31000| Wed Jun 13 22:33:03 [conn] moving chunk ns: test.foo moving ( ns:test.foo at: d2:d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202 lastmod: 2|7||000000000000000000000000 min: { x: 38475.0 } max: { x: MaxKey }) d2:d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202 -> d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31200| Wed Jun 13 22:33:03 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202", to: "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102", fromShard: "d2", toShard: "d1", min: { x: 38475.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_38475.0", configdb: "tp2.10gen.cc:29000" }
m31200| Wed Jun 13 22:33:03 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| Wed Jun 13 22:33:03 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31200:1339644779:563366256' acquired, ts : 4fd95b6fb84ac0176ed7bcce
m31200| Wed Jun 13 22:33:03 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:03-4", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:41188", time: new Date(1339644783909), what: "moveChunk.start", ns: "test.foo", details: { min: { x: 38475.0 }, max: { x: MaxKey }, from: "d2", to: "d1" } }
m31200| Wed Jun 13 22:33:03 [conn8] moveChunk request accepted at version 2|7||4fd95b47e24b46bcab13cf46
m31200| Wed Jun 13 22:33:03 [conn8] moveChunk number of documents: 1
m31201| Wed Jun 13 22:33:04 [FileAllocator] allocating new datafile /data/db/d2-1/test.1, filling with zeroes...
m31201| Wed Jun 13 22:33:04 [FileAllocator] done allocating datafile /data/db/d2-1/test.1, size: 32MB, took 0.086 secs
m31201| Wed Jun 13 22:33:04 [rsSync] datafileheader::init initializing /data/db/d2-1/test.1 n:1
m31202| Wed Jun 13 22:33:04 [FileAllocator] allocating new datafile /data/db/d2-2/test.1, filling with zeroes...
m31200| Wed Jun 13 22:33:04 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state SECONDARY
m31200| Wed Jun 13 22:33:04 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state SECONDARY
m31100| Wed Jun 13 22:33:04 [clientcursormon] mem (MB) res:55 virt:357 mapped:112
m31202| Wed Jun 13 22:33:04 [FileAllocator] done allocating datafile /data/db/d2-2/test.1, size: 32MB, took 0.216 secs
m31202| Wed Jun 13 22:33:04 [rsSync] datafileheader::init initializing /data/db/d2-2/test.1 n:1
m31201| Wed Jun 13 22:33:04 [rsHealthPoll] replSet member tp2.10gen.cc:31202 is now in state SECONDARY
m31101| Wed Jun 13 22:33:04 [clientcursormon] mem (MB) res:54 virt:339 mapped:128
m31202| Wed Jun 13 22:33:04 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state SECONDARY
m31102| Wed Jun 13 22:33:04 [clientcursormon] mem (MB) res:54 virt:339 mapped:128
m31200| Wed Jun 13 22:33:04 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202", min: { x: 38475.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "catchup", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Wed Jun 13 22:33:05 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202", min: { x: 38475.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "catchup", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Wed Jun 13 22:33:05 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 38475.0 } -> { x: MaxKey }
m31200| Wed Jun 13 22:33:06 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202", min: { x: 38475.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Wed Jun 13 22:33:06 [conn8] moveChunk setting version to: 3|0||4fd95b47e24b46bcab13cf46
m31100| Wed Jun 13 22:33:06 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: 38475.0 } -> { x: MaxKey }
m31100| Wed Jun 13 22:33:06 [migrateThread] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:06-5", server: "tp2.10gen.cc", clientAddr: ":27017", time: new Date(1339644786922), what: "moveChunk.to", ns: "test.foo", details: { min: { x: 38475.0 }, max: { x: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 3010 } }
m31200| Wed Jun 13 22:33:06 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202", min: { x: 38475.0 }, max: { x: MaxKey }, shardKeyPattern: { x: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 97, catchup: 0, steady: 0 }, ok: 1.0 }
m31200| Wed Jun 13 22:33:06 [conn8] moveChunk updating self version to: 3|1||4fd95b47e24b46bcab13cf46 through { x: 5850.0 } -> { x: 16732.0 } for collection 'test.foo'
m31200| Wed Jun 13 22:33:06 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:06-5", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:41188", time: new Date(1339644786924), what: "moveChunk.commit", ns: "test.foo", details: { min: { x: 38475.0 }, max: { x: MaxKey }, from: "d2", to: "d1" } }
m31200| Wed Jun 13 22:33:06 [conn8] doing delete inline
m31200| Wed Jun 13 22:33:06 [conn8] moveChunk deleted: 1
m31200| Wed Jun 13 22:33:07 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31200:1339644779:563366256' unlocked.
m31200| Wed Jun 13 22:33:07 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:07-6", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:41188", time: new Date(1339644787927), what: "moveChunk.from", ns: "test.foo", details: { min: { x: 38475.0 }, max: { x: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 3001, step5 of 6: 12, step6 of 6: 1001 } }
m31200| Wed Jun 13 22:33:07 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202", to: "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102", fromShard: "d2", toShard: "d1", min: { x: 38475.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-x_38475.0", configdb: "tp2.10gen.cc:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:9 r:617474 w:1186 reslen:37 4018ms
m31000| Wed Jun 13 22:33:07 [conn] ChunkManager: time to load chunks for test.foo: 2ms sequenceNumber: 9 version: 3|1||4fd95b47e24b46bcab13cf46 based on: 2|7||4fd95b47e24b46bcab13cf46
m31100| Wed Jun 13 22:33:09 [conn8] request split points lookup for chunk test.foo { : 38475.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:09 [conn8] request split points lookup for chunk test.foo { : 38475.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:10 [conn8] request split points lookup for chunk test.foo { : 38475.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:10 [conn8] request split points lookup for chunk test.foo { : 38475.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:10 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 38475.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:10 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 38475.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 50152.0 } ], shardId: "test.foo-x_38475.0", configdb: "tp2.10gen.cc:29000" }
m31100| Wed Jun 13 22:33:10 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Wed Jun 13 22:33:10 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' acquired, ts : 4fd95b768529b0c3a8f3d66d
m31100| Wed Jun 13 22:33:10 [conn8] splitChunk accepted at version 3|0||4fd95b47e24b46bcab13cf46
m31100| Wed Jun 13 22:33:10 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:10-6", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644790570), what: "split", ns: "test.foo", details: { before: { min: { x: 38475.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 38475.0 }, max: { x: 50152.0 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 50152.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31100| Wed Jun 13 22:33:10 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' unlocked.
m31000| Wed Jun 13 22:33:10 [conn] ChunkManager: time to load chunks for test.foo: 2ms sequenceNumber: 10 version: 3|3||4fd95b47e24b46bcab13cf46 based on: 3|1||4fd95b47e24b46bcab13cf46
m31000| Wed Jun 13 22:33:10 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 lastmod: 3|0||000000000000000000000000 min: { x: 38475.0 } max: { x: MaxKey } on: { x: 50152.0 } (splitThreshold 943718) size: 1048600 (migrate suggested)
m31100| Wed Jun 13 22:33:10 [conn8] request split points lookup for chunk test.foo { : 50152.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:11 [conn8] request split points lookup for chunk test.foo { : 50152.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:11 [conn8] request split points lookup for chunk test.foo { : 50152.0 } -->> { : MaxKey }
m30999| Wed Jun 13 22:33:12 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b7893454f4c315250ff
m30999| Wed Jun 13 22:33:12 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:33:12 [Balancer] d1 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:33:12 [Balancer] d2 maxSize: 0 currSize: 144 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:33:12 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:33:12 [Balancer] d1
m30999| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_38475.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 38475.0 }, max: { x: 50152.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_50152.0", lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 50152.0 }, max: { x: MaxKey }, shard: "d1" }
m30999| Wed Jun 13 22:33:12 [Balancer] d2
m30999| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 16732.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_16732.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 16732.0 }, max: { x: 27969.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_27969.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 27969.0 }, max: { x: 38475.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:12 [Balancer] ----
m30999| Wed Jun 13 22:33:12 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m31100| Wed Jun 13 22:33:12 [conn8] request split points lookup for chunk test.foo { : 50152.0 } -->> { : MaxKey }
m31000| Wed Jun 13 22:33:12 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b78e24b46bcab13cf4b
m31000| Wed Jun 13 22:33:12 [Balancer] ---- ShardInfoMap
m31000| Wed Jun 13 22:33:12 [Balancer] d1 maxSize: 0 currSize: 112 draining: 0 hasOpsQueued: 0
m31000| Wed Jun 13 22:33:12 [Balancer] d2 maxSize: 0 currSize: 144 draining: 0 hasOpsQueued: 0
m31000| Wed Jun 13 22:33:12 [Balancer] ---- ShardToChunksMap
m31000| Wed Jun 13 22:33:12 [Balancer] d1
m31000| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_38475.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 38475.0 }, max: { x: 50152.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_50152.0", lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 50152.0 }, max: { x: MaxKey }, shard: "d1" }
m31000| Wed Jun 13 22:33:12 [Balancer] d2
m31000| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 16732.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_16732.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 16732.0 }, max: { x: 27969.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:12 [Balancer] { _id: "test.foo-x_27969.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 27969.0 }, max: { x: 38475.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:12 [Balancer] ----
m31000| Wed Jun 13 22:33:12 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
m31202| Wed Jun 13 22:33:12 [conn3] end connection 184.173.149.242:42937 (10 connections now open)
m31202| Wed Jun 13 22:33:12 [initandlisten] connection accepted from 184.173.149.242:43004 #12 (11 connections now open)
m31202| Wed Jun 13 22:33:12 [conn12] authenticate db: local { authenticate: 1, nonce: "1f2ee62170be1c78", user: "__system", key: "812830340002cde469c699a31330b926" }
m31102| Wed Jun 13 22:33:12 [conn10] end connection 184.173.149.242:52749 (10 connections now open)
m31102| Wed Jun 13 22:33:12 [initandlisten] connection accepted from 184.173.149.242:52816 #14 (11 connections now open)
m31102| Wed Jun 13 22:33:12 [conn14] authenticate db: local { authenticate: 1, nonce: "b5e57591a5898525", user: "__system", key: "d3c64c16cc506fc86c996a03fcf41187" }
m31100| Wed Jun 13 22:33:12 [FileAllocator] allocating new datafile /data/db/d1-0/test.1, filling with zeroes...
m31100| Wed Jun 13 22:33:12 [FileAllocator] done allocating datafile /data/db/d1-0/test.1, size: 32MB, took 0.058 secs
m31100| Wed Jun 13 22:33:12 [conn9] datafileheader::init initializing /data/db/d1-0/test.1 n:1
m31102| Wed Jun 13 22:33:12 [FileAllocator] allocating new datafile /data/db/d1-2/test.1, filling with zeroes...
m31101| Wed Jun 13 22:33:12 [FileAllocator] allocating new datafile /data/db/d1-1/test.1, filling with zeroes...
m31100| Wed Jun 13 22:33:12 [conn8] request split points lookup for chunk test.foo { : 50152.0 } -->> { : MaxKey }
m31102| Wed Jun 13 22:33:12 [FileAllocator] done allocating datafile /data/db/d1-2/test.1, size: 32MB, took 0.072 secs
m31102| Wed Jun 13 22:33:12 [rsSync] datafileheader::init initializing /data/db/d1-2/test.1 n:1
m31101| Wed Jun 13 22:33:12 [FileAllocator] done allocating datafile /data/db/d1-1/test.1, size: 32MB, took 0.081 secs
m31101| Wed Jun 13 22:33:12 [rsSync] datafileheader::init initializing /data/db/d1-1/test.1 n:1
m31100| Wed Jun 13 22:33:13 [conn8] request split points lookup for chunk test.foo { : 50152.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:13 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 50152.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:13 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 50152.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 61501.0 } ], shardId: "test.foo-x_50152.0", configdb: "tp2.10gen.cc:29000" }
m31100| Wed Jun 13 22:33:13 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Wed Jun 13 22:33:13 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' acquired, ts : 4fd95b798529b0c3a8f3d66e
m31100| Wed Jun 13 22:33:13 [conn8] splitChunk accepted at version 3|3||4fd95b47e24b46bcab13cf46
m29000| Wed Jun 13 22:33:13 [conn14] info PageFaultRetryableSection will not yield, already locked upon reaching
m31100| Wed Jun 13 22:33:13 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:13-7", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644793167), what: "split", ns: "test.foo", details: { before: { min: { x: 50152.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 50152.0 }, max: { x: 61501.0 }, lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 61501.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31100| Wed Jun 13 22:33:13 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' unlocked.
m31000| Wed Jun 13 22:33:13 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 11 version: 3|5||4fd95b47e24b46bcab13cf46 based on: 3|3||4fd95b47e24b46bcab13cf46
m31000| Wed Jun 13 22:33:13 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 lastmod: 3|3||000000000000000000000000 min: { x: 50152.0 } max: { x: MaxKey } on: { x: 61501.0 } (splitThreshold 943718) size: 1048600 (migrate suggested)
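The autosplit sequence above is the normal flow: the mongod holding the chunk (conn8 on m31100) computes split points once the chunk passes the split threshold (943718 bytes here), mongos sends it a splitChunk request under the distributed lock, and the chunk metadata on the config server is reloaded (the ChunkManager sequenceNumber bumps). An equivalent split can be requested by hand from a mongos shell; a minimal sketch, reusing the split key from this log entry:

    // connected to a mongos (tp2.10gen.cc:31000 in this run)
    // split the test.foo chunk at x = 61501, mirroring the autosplit above
    db.adminCommand({ split: "test.foo", middle: { x: 61501.0 } })
    // inspect the resulting chunk metadata held on the config server
    db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).sort({ min: 1 })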
m31100| Wed Jun 13 22:33:13 [conn8] request split points lookup for chunk test.foo { : 61501.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:13 [conn8] request split points lookup for chunk test.foo { : 61501.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:14 [conn8] request split points lookup for chunk test.foo { : 61501.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:14 [conn8] request split points lookup for chunk test.foo { : 61501.0 } -->> { : MaxKey }
m31202| Wed Jun 13 22:33:14 [conn4] end connection 184.173.149.242:42943 (10 connections now open)
m31202| Wed Jun 13 22:33:14 [initandlisten] connection accepted from 184.173.149.242:43006 #13 (12 connections now open)
m31202| Wed Jun 13 22:33:14 [conn13] authenticate db: local { authenticate: 1, nonce: "2df4d001ca0e9008", user: "__system", key: "94cff5d63fc6a0ca84453d3226c62a42" }
m31102| Wed Jun 13 22:33:14 [conn11] end connection 184.173.149.242:52751 (10 connections now open)
m31102| Wed Jun 13 22:33:14 [initandlisten] connection accepted from 184.173.149.242:52818 #15 (11 connections now open)
m31102| Wed Jun 13 22:33:14 [conn15] authenticate db: local { authenticate: 1, nonce: "d2bcba4f7d72ea42", user: "__system", key: "dda0040691bfee979373028fc1fe406d" }
m31201| Wed Jun 13 22:33:14 [conn4] end connection 184.173.149.242:59501 (10 connections now open)
m31201| Wed Jun 13 22:33:14 [initandlisten] connection accepted from 184.173.149.242:59565 #13 (11 connections now open)
m31201| Wed Jun 13 22:33:14 [conn13] authenticate db: local { authenticate: 1, nonce: "a7f53f675201fe5", user: "__system", key: "e0c089b2427d34d91840671fe446b266" }
m31100| Wed Jun 13 22:33:14 [conn8] request split points lookup for chunk test.foo { : 61501.0 } -->> { : MaxKey }
m31101| Wed Jun 13 22:33:14 [conn12] end connection 184.173.149.242:56595 (11 connections now open)
m31101| Wed Jun 13 22:33:14 [initandlisten] connection accepted from 184.173.149.242:56662 #16 (12 connections now open)
m31101| Wed Jun 13 22:33:14 [conn16] authenticate db: local { authenticate: 1, nonce: "493a464467ed148a", user: "__system", key: "0234bfba6cae43df98d866ae459afc54" }
m31100| Wed Jun 13 22:33:15 [conn8] request split points lookup for chunk test.foo { : 61501.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:15 [conn8] request split points lookup for chunk test.foo { : 61501.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:15 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 61501.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:15 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 61501.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 73876.0 } ], shardId: "test.foo-x_61501.0", configdb: "tp2.10gen.cc:29000" }
m31100| Wed Jun 13 22:33:15 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Wed Jun 13 22:33:15 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' acquired, ts : 4fd95b7b8529b0c3a8f3d66f
m31100| Wed Jun 13 22:33:15 [conn8] splitChunk accepted at version 3|5||4fd95b47e24b46bcab13cf46
m31100| Wed Jun 13 22:33:15 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:15-8", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644795540), what: "split", ns: "test.foo", details: { before: { min: { x: 61501.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 61501.0 }, max: { x: 73876.0 }, lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 73876.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|7, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31100| Wed Jun 13 22:33:15 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' unlocked.
m31000| Wed Jun 13 22:33:15 [conn] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 3|7||4fd95b47e24b46bcab13cf46 based on: 3|5||4fd95b47e24b46bcab13cf46
m31000| Wed Jun 13 22:33:15 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 lastmod: 3|5||000000000000000000000000 min: { x: 61501.0 } max: { x: MaxKey } on: { x: 73876.0 } (splitThreshold 943718) size: 1048600 (migrate suggested)
m31100| Wed Jun 13 22:33:15 [conn8] request split points lookup for chunk test.foo { : 73876.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:16 [conn8] request split points lookup for chunk test.foo { : 73876.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:16 [conn8] request split points lookup for chunk test.foo { : 73876.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:16 [conn8] request split points lookup for chunk test.foo { : 73876.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:17 [conn8] request split points lookup for chunk test.foo { : 73876.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:17 [conn8] request split points lookup for chunk test.foo { : 73876.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:18 [conn8] request split points lookup for chunk test.foo { : 73876.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:18 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 73876.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:18 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 73876.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 85769.0 } ], shardId: "test.foo-x_73876.0", configdb: "tp2.10gen.cc:29000" }
m31100| Wed Jun 13 22:33:18 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Wed Jun 13 22:33:18 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' acquired, ts : 4fd95b7e8529b0c3a8f3d670
m31100| Wed Jun 13 22:33:18 [conn8] splitChunk accepted at version 3|7||4fd95b47e24b46bcab13cf46
m31100| Wed Jun 13 22:33:18 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:18-9", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644798296), what: "split", ns: "test.foo", details: { before: { min: { x: 73876.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 73876.0 }, max: { x: 85769.0 }, lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 85769.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|9, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31100| Wed Jun 13 22:33:18 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' unlocked.
m31000| Wed Jun 13 22:33:18 [conn] ChunkManager: time to load chunks for test.foo: 2ms sequenceNumber: 13 version: 3|9||4fd95b47e24b46bcab13cf46 based on: 3|7||4fd95b47e24b46bcab13cf46
m31000| Wed Jun 13 22:33:18 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 lastmod: 3|7||000000000000000000000000 min: { x: 73876.0 } max: { x: MaxKey } on: { x: 85769.0 } (splitThreshold 943718) size: 1048600 (migrate suggested)
m31100| Wed Jun 13 22:33:18 [conn8] request split points lookup for chunk test.foo { : 85769.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:18 [conn8] request split points lookup for chunk test.foo { : 85769.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:19 [conn8] request split points lookup for chunk test.foo { : 85769.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:19 [conn8] request split points lookup for chunk test.foo { : 85769.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:20 [conn8] request split points lookup for chunk test.foo { : 85769.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:20 [conn8] request split points lookup for chunk test.foo { : 85769.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:20 [conn8] max number of requested split points reached (2) before the end of chunk test.foo { : 85769.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:20 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: 85769.0 }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 97359.0 } ], shardId: "test.foo-x_85769.0", configdb: "tp2.10gen.cc:29000" }
m31100| Wed Jun 13 22:33:20 [conn8] created new distributed lock for test.foo on tp2.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31100| Wed Jun 13 22:33:20 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' acquired, ts : 4fd95b808529b0c3a8f3d671
m31100| Wed Jun 13 22:33:20 [conn8] splitChunk accepted at version 3|9||4fd95b47e24b46bcab13cf46
m31100| Wed Jun 13 22:33:20 [conn8] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:33:20-10", server: "tp2.10gen.cc", clientAddr: "184.173.149.242:42838", time: new Date(1339644800533), what: "split", ns: "test.foo", details: { before: { min: { x: 85769.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { x: 85769.0 }, max: { x: 97359.0 }, lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') }, right: { min: { x: 97359.0 }, max: { x: MaxKey }, lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46') } } }
m31100| Wed Jun 13 22:33:20 [conn8] distributed lock 'test.foo/tp2.10gen.cc:31100:1339644773:1331256593' unlocked.
m31000| Wed Jun 13 22:33:20 [conn] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 14 version: 3|11||4fd95b47e24b46bcab13cf46 based on: 3|9||4fd95b47e24b46bcab13cf46
m31000| Wed Jun 13 22:33:20 [conn] autosplitted test.foo shard: ns:test.foo at: d1:d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102 lastmod: 3|9||000000000000000000000000 min: { x: 85769.0 } max: { x: MaxKey } on: { x: 97359.0 } (splitThreshold 943718) size: 1048600 (migrate suggested)
m31100| Wed Jun 13 22:33:20 [conn8] request split points lookup for chunk test.foo { : 97359.0 } -->> { : MaxKey }
m31100| Wed Jun 13 22:33:21 [conn8] request split points lookup for chunk test.foo { : 97359.0 } -->> { : MaxKey }
null
chunks: 8 3 11
m31000| range.universal(): 1
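The bare "chunks: 8 3 11" line is the test's chunk tally; it matches the Balancer dump that follows (8 chunks on d1, 3 on d2, 11 in total). A hedged sketch of reading the same counts from the config database through mongos, with the collection and field names as they appear in the Balancer output:

    // per-shard and total chunk counts for test.foo from the config metadata
    var cfg = db.getSiblingDB("config");
    cfg.chunks.count({ ns: "test.foo", shard: "d1" });   // 8 in this run
    cfg.chunks.count({ ns: "test.foo", shard: "d2" });   // 3 in this run
    cfg.chunks.count({ ns: "test.foo" });                // 11 in total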
m31200| Wed Jun 13 22:33:21 [conn34] getmore test.foo cursorid:123859683836260460 ntoreturn:0 keyUpdates:0 numYields: 34 locks(micros) r:208918 nreturned:32524 reslen:3154848 209ms
m31100| Wed Jun 13 22:33:21 [conn30] getmore test.foo cursorid:6418280023019319410 ntoreturn:0 keyUpdates:0 numYields: 46 locks(micros) r:271393 nreturned:43240 reslen:4194300 271ms
m30999| Wed Jun 13 22:33:22 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95b8293454f4c31525100
m30999| Wed Jun 13 22:33:22 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:33:22 [Balancer] d1 maxSize: 0 currSize: 144 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:33:22 [Balancer] d2 maxSize: 0 currSize: 144 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:33:22 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:33:22 [Balancer] d1
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_38475.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 38475.0 }, max: { x: 50152.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_50152.0", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 50152.0 }, max: { x: 61501.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_61501.0", lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 61501.0 }, max: { x: 73876.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_73876.0", lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 73876.0 }, max: { x: 85769.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_85769.0", lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 85769.0 }, max: { x: 97359.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_97359.0", lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 97359.0 }, max: { x: MaxKey }, shard: "d1" }
m30999| Wed Jun 13 22:33:22 [Balancer] d2
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 16732.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_16732.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 16732.0 }, max: { x: 27969.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_27969.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 27969.0 }, max: { x: 38475.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:22 [Balancer] ----
m30999| Wed Jun 13 22:33:22 [Balancer] chose [d1] to [d2] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:22 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 2 version: 3|11||4fd95b47e24b46bcab13cf46 based on: (empty)
m30999| Wed Jun 13 22:33:22 [Balancer] dev: lastError==0 won't report:right object ({}) doesn't have full shard key ({ x: 1.0 })
m30999| Wed Jun 13 22:33:22 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m30999| Wed Jun 13 22:33:22 [Balancer] scoped connection to tp2.10gen.cc:29000 not being returned to the pool
m29000| Wed Jun 13 22:33:22 [conn5] end connection 184.173.149.242:55430 (15 connections now open)
m30999| Wed Jun 13 22:33:22 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ x: 1.0 })
m31000| Wed Jun 13 22:33:22 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95b82e24b46bcab13cf4c
m31000| Wed Jun 13 22:33:22 [Balancer] ---- ShardInfoMap
m31000| Wed Jun 13 22:33:22 [Balancer] d1 maxSize: 0 currSize: 144 draining: 0 hasOpsQueued: 0
m31000| Wed Jun 13 22:33:22 [Balancer] d2 maxSize: 0 currSize: 144 draining: 0 hasOpsQueued: 0
m31000| Wed Jun 13 22:33:22 [Balancer] ---- ShardToChunksMap
m31000| Wed Jun 13 22:33:22 [Balancer] d1
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_38475.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 38475.0 }, max: { x: 50152.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_50152.0", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 50152.0 }, max: { x: 61501.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_61501.0", lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 61501.0 }, max: { x: 73876.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_73876.0", lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 73876.0 }, max: { x: 85769.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_85769.0", lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 85769.0 }, max: { x: 97359.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_97359.0", lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 97359.0 }, max: { x: MaxKey }, shard: "d1" }
m31000| Wed Jun 13 22:33:22 [Balancer] d2
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 16732.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_16732.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 16732.0 }, max: { x: 27969.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:22 [Balancer] { _id: "test.foo-x_27969.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 27969.0 }, max: { x: 38475.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:22 [Balancer] ----
m31000| Wed Jun 13 22:33:22 [Balancer] chose [d1] to [d2] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:22 [Balancer] dev: lastError==0 won't report:right object ({}) doesn't have full shard key ({ x: 1.0 })
m31000| Wed Jun 13 22:33:22 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
m31000| Wed Jun 13 22:33:22 [Balancer] scoped connection to tp2.10gen.cc:29000 not being returned to the pool
m31000| Wed Jun 13 22:33:22 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ x: 1.0 })
m29000| Wed Jun 13 22:33:22 [conn10] end connection 184.173.149.242:55437 (14 connections now open)
m31100| Wed Jun 13 22:33:22 [conn30] getmore test.foo cursorid:6418280023019319410 ntoreturn:0 keyUpdates:0 numYields: 17 locks(micros) r:423652 nreturned:24034 reslen:2331318 152ms
ReplSetTest waitForIndicator state on connection to tp2.10gen.cc:31201
[ 2 ]
ReplSetTest waitForIndicator from node connection to tp2.10gen.cc:31201
ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) :
{
    "set" : "d2",
    "date" : ISODate("2012-06-14T03:33:22Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "tp2.10gen.cc:31200",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 48,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "tp2.10gen.cc:31201",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "lastHeartbeat" : ISODate("2012-06-14T03:33:22Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "tp2.10gen.cc:31202",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "lastHeartbeat" : ISODate("2012-06-14T03:33:22Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
Status for : tp2.10gen.cc:31200, checking tp2.10gen.cc:31201/tp2.10gen.cc:31201
Status for : tp2.10gen.cc:31201, checking tp2.10gen.cc:31201/tp2.10gen.cc:31201
Status : 2 target state : 2
ReplSetTest waitForIndicator final status:
{
    "set" : "d2",
    "date" : ISODate("2012-06-14T03:33:22Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "tp2.10gen.cc:31200",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 48,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "tp2.10gen.cc:31201",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "lastHeartbeat" : ISODate("2012-06-14T03:33:22Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "tp2.10gen.cc:31202",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "lastHeartbeat" : ISODate("2012-06-14T03:33:22Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
ReplSetTest waitForIndicator state on connection to tp2.10gen.cc:31202
[ 2 ]
ReplSetTest waitForIndicator from node connection to tp2.10gen.cc:31202
ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) :
{
    "set" : "d2",
    "date" : ISODate("2012-06-14T03:33:22Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "tp2.10gen.cc:31200",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 48,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "tp2.10gen.cc:31201",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "lastHeartbeat" : ISODate("2012-06-14T03:33:22Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "tp2.10gen.cc:31202",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "lastHeartbeat" : ISODate("2012-06-14T03:33:22Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
Status for : tp2.10gen.cc:31200, checking tp2.10gen.cc:31202/tp2.10gen.cc:31202
Status for : tp2.10gen.cc:31201, checking tp2.10gen.cc:31202/tp2.10gen.cc:31202
Status for : tp2.10gen.cc:31202, checking tp2.10gen.cc:31202/tp2.10gen.cc:31202
Status : 2 target state : 2
ReplSetTest waitForIndicator final status:
{
    "set" : "d2",
    "date" : ISODate("2012-06-14T03:33:22Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "tp2.10gen.cc:31200",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 48,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "tp2.10gen.cc:31201",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "lastHeartbeat" : ISODate("2012-06-14T03:33:22Z"),
            "pingMs" : 0
        },
        {
            "_id" : 2,
            "name" : "tp2.10gen.cc:31202",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 38,
            "optime" : Timestamp(1339644786000, 1),
            "optimeDate" : ISODate("2012-06-14T03:33:06Z"),
            "lastHeartbeat" : ISODate("2012-06-14T03:33:22Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}
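The documents above are replSetGetStatus (rs.status()) output for set d2, which ReplSetTest.waitForIndicator polls until the named member reports the target state (2, i.e. SECONDARY). A minimal sketch of such a polling loop in the mongo shell; the helper name and structure are assumptions, not the harness code itself:

    // wait until `host` shows up as SECONDARY (state 2) in the set's status
    // note: this cluster runs with auth (keyFile), so a real connection would need to authenticate first
    function waitForSecondary(conn, host, timeoutMs) {
        var start = new Date();
        while (new Date() - start < timeoutMs) {
            var status = conn.getDB("admin").runCommand({ replSetGetStatus: 1 });
            var member = status.members.filter(function (m) { return m.name == host; })[0];
            if (member && member.state == 2) return status;
            sleep(1000);   // shell builtin; pause between polls
        }
        throw "timed out waiting for " + host + " to reach SECONDARY";
    }

    // e.g. waitForSecondary(new Mongo("tp2.10gen.cc:31200"), "tp2.10gen.cc:31202", 300000)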
m31100| Wed Jun 13 22:33:22 [FileAllocator] allocating new datafile /data/db/d1-0/admin.ns, filling with zeroes...
{
    "user" : "foo",
    "readOnly" : false,
    "pwd" : "3563025c1e89c7ad43fb63fcbcf1c3c6",
    "_id" : ObjectId("4fd95b82e204cf4c84a2c190")
}
m31100| Wed Jun 13 22:33:22 [FileAllocator] done allocating datafile /data/db/d1-0/admin.ns, size: 16MB, took 0.037 secs
m31100| Wed Jun 13 22:33:22 [FileAllocator] allocating new datafile /data/db/d1-0/admin.0, filling with zeroes...
m31100| Wed Jun 13 22:33:22 [FileAllocator] done allocating datafile /data/db/d1-0/admin.0, size: 16MB, took 0.036 secs
m31100| Wed Jun 13 22:33:22 [conn2] datafileheader::init initializing /data/db/d1-0/admin.0 n:0
m31100| Wed Jun 13 22:33:22 [conn2] build index admin.system.users { _id: 1 }
m31100| Wed Jun 13 22:33:22 [conn2] build index done. scanned 0 total records. 0 secs
could not find getLastError object : "getlasterror failed: { \"errmsg\" : \"need to login\", \"ok\" : 0 }"
m31200| Wed Jun 13 22:33:22 [FileAllocator] allocating new datafile /data/db/d2-0/admin.ns, filling with zeroes...
{
    "user" : "foo",
    "readOnly" : false,
    "pwd" : "3563025c1e89c7ad43fb63fcbcf1c3c6",
    "_id" : ObjectId("4fd95b82e204cf4c84a2c191")
}
m31200| Wed Jun 13 22:33:22 [FileAllocator] done allocating datafile /data/db/d2-0/admin.ns, size: 16MB, took 0.044 secs
m31200| Wed Jun 13 22:33:22 [FileAllocator] allocating new datafile /data/db/d2-0/admin.0, filling with zeroes...
m31200| Wed Jun 13 22:33:22 [FileAllocator] done allocating datafile /data/db/d2-0/admin.0, size: 16MB, took 0.035 secs
m31200| Wed Jun 13 22:33:22 [conn2] datafileheader::init initializing /data/db/d2-0/admin.0 n:0
m31200| Wed Jun 13 22:33:22 [conn2] build index admin.system.users { _id: 1 }
m31200| Wed Jun 13 22:33:22 [conn2] build index done. scanned 0 total records. 0 secs
could not find getLastError object : "getlasterror failed: { \"errmsg\" : \"need to login\", \"ok\" : 0 }"
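The two documents printed above are the admin.system.users entries created on each shard primary (m31100 and m31200); the pwd field is a hash of the credentials, not the cleartext password, and the "could not find getLastError object ... need to login" lines show getLastError being rejected on a connection that has not authenticated. A hedged sketch of the 2.x-era shell helper that produces such a document (the cleartext password is an assumption; only its hash appears in the log):

    // on a shard primary, create the admin user "foo"
    // db.addUser(user, password) is the 2.x shell helper; the password value here is hypothetical
    db.getSiblingDB("admin").addUser("foo", "<password>");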
m31000| Wed Jun 13 22:33:22 [conn] authenticate db: test { authenticate: 1.0, user: "bar", nonce: "828fc7ec460fa788", key: "2f5c9cbc8005e7fce3c193b9ad6f09bc" }
{ "dbname" : "test", "user" : "bar", "readOnly" : false, "ok" : 1 }
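Here the shell authenticates the application user "bar" against the test database through mongos; the matching password is visible later in this log, where mongodump is started with -u bar -p baz. A minimal sketch of the same step:

    // authenticate against the test database via the mongos at tp2.10gen.cc:31000
    db.getSiblingDB("test").auth("bar", "baz");   // returns 1 on success, matching the ok: 1 above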
testing map reduce
m31000| range.universal(): 1
m31200| Wed Jun 13 22:33:22 [conn13] CMD: drop test.tmp.mr.foo_0_inc
m31100| Wed Jun 13 22:33:22 [conn9] CMD: drop test.tmp.mr.foo_0_inc
m31200| Wed Jun 13 22:33:22 [conn13] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31100| Wed Jun 13 22:33:22 [conn9] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31200| Wed Jun 13 22:33:22 [conn13] build index done. scanned 0 total records. 0 secs
m31100| Wed Jun 13 22:33:22 [conn9] build index done. scanned 0 total records. 0 secs
m31200| Wed Jun 13 22:33:22 [conn13] CMD: drop test.tmp.mr.foo_0
m31100| Wed Jun 13 22:33:22 [conn9] CMD: drop test.tmp.mr.foo_0
m31100| Wed Jun 13 22:33:22 [conn9] build index test.tmp.mr.foo_0 { _id: 1 }
m31200| Wed Jun 13 22:33:22 [conn13] build index test.tmp.mr.foo_0 { _id: 1 }
m31100| Wed Jun 13 22:33:22 [conn9] build index done. scanned 0 total records. 0 secs
m31200| Wed Jun 13 22:33:22 [conn13] build index done. scanned 0 total records. 0 secs
m31201| Wed Jun 13 22:33:22 [FileAllocator] allocating new datafile /data/db/d2-1/admin.ns, filling with zeroes...
m31202| Wed Jun 13 22:33:22 [FileAllocator] allocating new datafile /data/db/d2-2/admin.ns, filling with zeroes...
m31101| Wed Jun 13 22:33:23 [FileAllocator] allocating new datafile /data/db/d1-1/admin.ns, filling with zeroes...
m31102| Wed Jun 13 22:33:23 [FileAllocator] allocating new datafile /data/db/d1-2/admin.ns, filling with zeroes...
m31201| Wed Jun 13 22:33:23 [FileAllocator] done allocating datafile /data/db/d2-1/admin.ns, size: 16MB, took 0.391 secs
m31202| Wed Jun 13 22:33:23 [FileAllocator] done allocating datafile /data/db/d2-2/admin.ns, size: 16MB, took 0.379 secs
m31201| Wed Jun 13 22:33:23 [FileAllocator] allocating new datafile /data/db/d2-1/admin.0, filling with zeroes...
m31202| Wed Jun 13 22:33:23 [FileAllocator] allocating new datafile /data/db/d2-2/admin.0, filling with zeroes...
m31101| Wed Jun 13 22:33:23 [FileAllocator] done allocating datafile /data/db/d1-1/admin.ns, size: 16MB, took 0.395 secs
m31101| Wed Jun 13 22:33:23 [FileAllocator] allocating new datafile /data/db/d1-1/admin.0, filling with zeroes...
m31202| Wed Jun 13 22:33:23 [FileAllocator] done allocating datafile /data/db/d2-2/admin.0, size: 16MB, took 0.457 secs
m31202| Wed Jun 13 22:33:23 [rsSync] datafileheader::init initializing /data/db/d2-2/admin.0 n:0
m31202| Wed Jun 13 22:33:23 [rsSync] build index admin.system.users { _id: 1 }
m31102| Wed Jun 13 22:33:23 [FileAllocator] done allocating datafile /data/db/d1-2/admin.ns, size: 16MB, took 0.464 secs
m31201| Wed Jun 13 22:33:23 [FileAllocator] done allocating datafile /data/db/d2-1/admin.0, size: 16MB, took 0.46 secs
m31201| Wed Jun 13 22:33:23 [rsSync] datafileheader::init initializing /data/db/d2-1/admin.0 n:0
m31201| Wed Jun 13 22:33:23 [rsSync] build index admin.system.users { _id: 1 }
m31102| Wed Jun 13 22:33:23 [FileAllocator] allocating new datafile /data/db/d1-2/admin.0, filling with zeroes...
m31101| Wed Jun 13 22:33:23 [FileAllocator] done allocating datafile /data/db/d1-1/admin.0, size: 16MB, took 0.175 secs
m31101| Wed Jun 13 22:33:23 [rsSync] datafileheader::init initializing /data/db/d1-1/admin.0 n:0
m31101| Wed Jun 13 22:33:23 [rsSync] build index admin.system.users { _id: 1 }
m31101| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0.159 secs
m31202| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0.269 secs
m31201| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0.269 secs
m31202| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0_inc { _id: 1 }
m31101| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0_inc { _id: 1 }
m31202| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Wed Jun 13 22:33:24 [rsSync] info: creating collection test.tmp.mr.foo_0_inc on add index
m31202| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31101| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Wed Jun 13 22:33:24 [rsSync] info: creating collection test.tmp.mr.foo_0_inc on add index
m31201| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0_inc { _id: 1 }
m31101| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31202| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Wed Jun 13 22:33:24 [rsSync]
m31202| debug have W lock but w would suffice for command create
m31101| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31202| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0 { _id: 1 }
m31201| Wed Jun 13 22:33:24 [rsSync] info: creating collection test.tmp.mr.foo_0_inc on add index
m31101| Wed Jun 13 22:33:24 [rsSync]
m31101| debug have W lock but w would suffice for command create
m31201| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31101| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0 { _id: 1 }
m31202| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Wed Jun 13 22:33:24 [rsSync]
m31201| debug have W lock but w would suffice for command create
m31201| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0 { _id: 1 }
m31201| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:33:24 [FileAllocator] done allocating datafile /data/db/d1-2/admin.0, size: 16MB, took 0.285 secs
m31102| Wed Jun 13 22:33:24 [rsSync] datafileheader::init initializing /data/db/d1-2/admin.0 n:0
m31102| Wed Jun 13 22:33:24 [rsSync] build index admin.system.users { _id: 1 }
m31102| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0_inc { _id: 1 }
m31102| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:33:24 [rsSync] info: creating collection test.tmp.mr.foo_0_inc on add index
m31102| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0_inc { 0: 1 }
m31102| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:33:24 [rsSync]
m31102| debug have W lock but w would suffice for command create
m31102| Wed Jun 13 22:33:24 [rsSync] build index test.tmp.mr.foo_0 { _id: 1 }
m31102| Wed Jun 13 22:33:24 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Wed Jun 13 22:33:25 [conn9] 26600/67375 39%
m31200| Wed Jun 13 22:33:25 [conn13] 27500/32625 84%
m31200| Wed Jun 13 22:33:28 [conn13] 13700/32625 41%
m31201| Wed Jun 13 22:33:28 [conn9] end connection 184.173.149.242:59529 (10 connections now open)
m31201| Wed Jun 13 22:33:28 [initandlisten] connection accepted from 184.173.149.242:38341 #14 (11 connections now open)
m31201| Wed Jun 13 22:33:28 [conn14] authenticate db: local { authenticate: 1, nonce: "b70b1f78e76b7b35", user: "__system", key: "0d17cb77bbd7b8acd2fec45ce554a6ca" }
m31101| Wed Jun 13 22:33:28 [conn15] end connection 184.173.149.242:56626 (11 connections now open)
m31101| Wed Jun 13 22:33:28 [initandlisten] connection accepted from 184.173.149.242:50999 #17 (12 connections now open)
m31101| Wed Jun 13 22:33:28 [conn17] authenticate db: local { authenticate: 1, nonce: "ead3212d0c943f99", user: "__system", key: "6e86508f5aaa45d83aa9cea4ee0aa5ee" }
m31100| Wed Jun 13 22:33:28 [conn9] 60000/67375 89%
m31200| Wed Jun 13 22:33:30 [conn14] end connection 184.173.149.242:41211 (18 connections now open)
m31200| Wed Jun 13 22:33:30 [initandlisten] connection accepted from 184.173.149.242:39059 #35 (19 connections now open)
m31200| Wed Jun 13 22:33:30 [conn35] authenticate db: local { authenticate: 1, nonce: "16c3b68c0069d111", user: "__system", key: "bb89bdf7b95edde9b32e0ba65003f84f" }
m31100| Wed Jun 13 22:33:30 [conn36] end connection 184.173.149.242:42920 (19 connections now open)
m31100| Wed Jun 13 22:33:30 [initandlisten] connection accepted from 184.173.149.242:44966 #38 (21 connections now open)
m31100| Wed Jun 13 22:33:30 [conn38] authenticate db: local { authenticate: 1, nonce: "b8691651c699af4a", user: "__system", key: "5aaf6eca7adbbc5b4ff8e45cbf3df8b6" }
m31200| Wed Jun 13 22:33:30 [conn15] end connection 184.173.149.242:41213 (18 connections now open)
m31200| Wed Jun 13 22:33:30 [initandlisten] connection accepted from 184.173.149.242:39061 #36 (19 connections now open)
m31200| Wed Jun 13 22:33:30 [conn36] authenticate db: local { authenticate: 1, nonce: "e93bcd1f43d4e1b9", user: "__system", key: "da1339b702e0082ae44555654a02b395" }
m31100| Wed Jun 13 22:33:30 [conn37] end connection 184.173.149.242:42922 (19 connections now open)
m31100| Wed Jun 13 22:33:30 [initandlisten] connection accepted from 184.173.149.242:44968 #39 (20 connections now open)
m31100| Wed Jun 13 22:33:30 [conn39] authenticate db: local { authenticate: 1, nonce: "c266ef8e1e26900", user: "__system", key: "0b539f18a5c44af57728af2fd78c6d94" }
m31200| Wed Jun 13 22:33:31 [conn13] 32000/32625 98%
m31200| Wed Jun 13 22:33:31 [conn13]
m31200| debug have W lock but w would suffice for command drop
m31200| Wed Jun 13 22:33:31 [conn13] CMD: drop test.tmp.mrs.foo_1339644802_0
m31200| Wed Jun 13 22:33:31 [conn13]
m31200| debug have W lock but w would suffice for command drop
m31200| Wed Jun 13 22:33:31 [conn13] CMD: drop test.tmp.mr.foo_0
m31200| Wed Jun 13 22:33:31 [conn13] CMD: drop test.tmp.mr.foo_0
m31200| Wed Jun 13 22:33:31 [conn13] CMD: drop test.tmp.mr.foo_0_inc
m31200| Wed Jun 13 22:33:31 [conn13] command test.$cmd command: { mapreduce: "foo", map: function () {
m31200| emit(this.x, 1);
m31200| }, reduce: function (key, values) {
m31200| return values.length;
m31200| }, out: "tmp.mrs.foo_1339644802_0", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 32952 locks(micros) W:4529 r:2176877 w:11635372 reslen:148 8263ms
m31202| Wed Jun 13 22:33:31 [rsSync]
m31202| debug have W lock but w would suffice for command drop
m31202| Wed Jun 13 22:33:31 [rsSync] CMD: drop test.tmp.mr.foo_0_inc
m31201| Wed Jun 13 22:33:31 [rsSync]
m31201| debug have W lock but w would suffice for command drop
m31201| Wed Jun 13 22:33:31 [rsSync] CMD: drop test.tmp.mr.foo_0_inc
m31100| Wed Jun 13 22:33:32 [conn9] 11700/67375 17%
m31200| Wed Jun 13 22:33:34 [clientcursormon] mem (MB) res:110 virt:445 mapped:176
m31201| Wed Jun 13 22:33:34 [clientcursormon] mem (MB) res:83 virt:414 mapped:192
m31202| Wed Jun 13 22:33:34 [clientcursormon] mem (MB) res:82 virt:403 mapped:192
m31100| Wed Jun 13 22:33:35 [conn9] 31400/67375 46%
m31100| Wed Jun 13 22:33:38 [conn9] 48600/67375 72%
m31100| Wed Jun 13 22:33:40 [conn9]
m31100| debug have W lock but w would suffice for command drop
m31100| Wed Jun 13 22:33:40 [conn9] CMD: drop test.tmp.mrs.foo_1339644802_0
m31100| Wed Jun 13 22:33:40 [conn9]
m31100| debug have W lock but w would suffice for command drop
m31100| Wed Jun 13 22:33:40 [conn9] CMD: drop test.tmp.mr.foo_0
m31100| Wed Jun 13 22:33:40 [conn9] CMD: drop test.tmp.mr.foo_0
m31100| Wed Jun 13 22:33:40 [conn9] CMD: drop test.tmp.mr.foo_0_inc
m31100| Wed Jun 13 22:33:40 [conn9] command test.$cmd command: { mapreduce: "foo", map: function () {
m31100| emit(this.x, 1);
m31100| }, reduce: function (key, values) {
m31100| return values.length;
m31100| }, out: "tmp.mrs.foo_1339644802_0", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 68049 locks(micros) W:4614 r:4812346 w:26220239 reslen:148 17989ms
m31100| Wed Jun 13 22:33:40 [conn9] CMD: drop test.tmp.mr.foo_1
m31100| Wed Jun 13 22:33:40 [conn9] build index test.tmp.mr.foo_1 { _id: 1 }
m31100| Wed Jun 13 22:33:40 [conn9] build index done. scanned 0 total records. 0 secs
m31101| Wed Jun 13 22:33:40 [rsSync]
m31101| debug have W lock but w would suffice for command drop
m31101| Wed Jun 13 22:33:40 [rsSync] CMD: drop test.tmp.mr.foo_0_inc
m31102| Wed Jun 13 22:33:40 [rsSync]
m31102| debug have W lock but w would suffice for command drop
m31102| Wed Jun 13 22:33:40 [rsSync] CMD: drop test.tmp.mr.foo_0_inc
m31101| Wed Jun 13 22:33:40 [rsSync]
m31101| debug have W lock but w would suffice for command create
m31101| Wed Jun 13 22:33:40 [rsSync] build index test.tmp.mr.foo_1 { _id: 1 }
m31102| Wed Jun 13 22:33:40 [rsSync]
m31102| debug have W lock but w would suffice for command create
m31101| Wed Jun 13 22:33:40 [rsSync] build index done. scanned 0 total records. 0 secs
m31102| Wed Jun 13 22:33:40 [rsSync] build index test.tmp.mr.foo_1 { _id: 1 }
m31102| Wed Jun 13 22:33:40 [rsSync] build index done. scanned 0 total records. 0 secs
m31100| Wed Jun 13 22:33:40 [conn9] ChunkManager: time to load chunks for test.foo: 8ms sequenceNumber: 2 version: 3|11||4fd95b47e24b46bcab13cf46 based on: (empty)
m31100| Wed Jun 13 22:33:40 [conn9] starting new replica set monitor for replica set d1 with seed of tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31100| Wed Jun 13 22:33:40 [conn9] successfully connected to seed tp2.10gen.cc:31100 for replica set d1
m31100| Wed Jun 13 22:33:40 [initandlisten] connection accepted from 184.173.149.242:44969 #40 (21 connections now open)
m31100| Wed Jun 13 22:33:40 [conn9] changing hosts to { 0: "tp2.10gen.cc:31100", 1: "tp2.10gen.cc:31102", 2: "tp2.10gen.cc:31101" } from d1/
m31100| Wed Jun 13 22:33:40 [conn9] trying to add new host tp2.10gen.cc:31100 to replica set d1
m31100| Wed Jun 13 22:33:40 [initandlisten] connection accepted from 184.173.149.242:44970 #41 (22 connections now open)
m31100| Wed Jun 13 22:33:40 [conn9] successfully connected to new host tp2.10gen.cc:31100 in replica set d1
m31100| Wed Jun 13 22:33:40 [conn9] trying to add new host tp2.10gen.cc:31101 to replica set d1
m31100| Wed Jun 13 22:33:40 [conn9] successfully connected to new host tp2.10gen.cc:31101 in replica set d1
m31100| Wed Jun 13 22:33:40 [conn9] trying to add new host tp2.10gen.cc:31102 to replica set d1
m31101| Wed Jun 13 22:33:40 [initandlisten] connection accepted from 184.173.149.242:51006 #18 (13 connections now open)
m31100| Wed Jun 13 22:33:40 [conn9] successfully connected to new host tp2.10gen.cc:31102 in replica set d1
m31102| Wed Jun 13 22:33:40 [initandlisten] connection accepted from 184.173.149.242:55954 #16 (12 connections now open)
m31100| Wed Jun 13 22:33:40 [initandlisten] connection accepted from 184.173.149.242:44973 #42 (23 connections now open)
m31100| Wed Jun 13 22:33:40 [conn42] authenticate db: local { authenticate: 1, nonce: "4c0a6bc48c930e90", user: "__system", key: "c59d22b7cbd60a57b26bf491b1fb7073" }
m31100| Wed Jun 13 22:33:40 [conn40] end connection 184.173.149.242:44969 (22 connections now open)
m31100| Wed Jun 13 22:33:40 [conn9] Primary for replica set d1 changed to tp2.10gen.cc:31100
m31101| Wed Jun 13 22:33:40 [initandlisten] connection accepted from 184.173.149.242:51009 #19 (14 connections now open)
m31101| Wed Jun 13 22:33:40 [conn19] authenticate db: local { authenticate: 1, nonce: "faf6d94907e175f0", user: "__system", key: "7a548542b6035c83ce9a917443241980" }
m31102| Wed Jun 13 22:33:40 [initandlisten] connection accepted from 184.173.149.242:55957 #17 (13 connections now open)
m31102| Wed Jun 13 22:33:40 [conn17] authenticate db: local { authenticate: 1, nonce: "5fadba2cbd0402b", user: "__system", key: "9b8500c55277a542ddb43b8a04108f95" }
m31100| Wed Jun 13 22:33:40 [conn9] replica set monitor for replica set d1 started, address is d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102
m31100| Wed Jun 13 22:33:40 [initandlisten] connection accepted from 184.173.149.242:44976 #43 (23 connections now open)
m31100| Wed Jun 13 22:33:40 [conn43] authenticate db: local { authenticate: 1, nonce: "10fadb04119c1aac", user: "__system", key: "98fbd8640d1a5a432553a6a1b33e9043" }
m31200| Wed Jun 13 22:33:40 [initandlisten] connection accepted from 184.173.149.242:39071 #37 (20 connections now open)
m31200| Wed Jun 13 22:33:40 [conn37] authenticate db: local { authenticate: 1, nonce: "43b607e8749b3c70", user: "__system", key: "d217cd7f48eba69a8c44b813c7905263" }
m31100| Wed Jun 13 22:33:41 [conn42] getmore test.tmp.mrs.foo_1339644802_0 query: { query: {}, orderby: { _id: 1 } } cursorid:6319334232901402024 ntoreturn:0 keyUpdates:0 numYields: 57 locks(micros) r:731008 nreturned:67274 reslen:2220062 731ms
m31202| Wed Jun 13 22:33:42 [conn12] end connection 184.173.149.242:43004 (10 connections now open)
m31202| Wed Jun 13 22:33:42 [initandlisten] connection accepted from 184.173.149.242:56960 #14 (12 connections now open)
m31202| Wed Jun 13 22:33:42 [conn14] authenticate db: local { authenticate: 1, nonce: "12ffc842891a6d80", user: "__system", key: "37b46ec0c7150d4bf54b35a311487c49" }
m31102| Wed Jun 13 22:33:42 [conn14] end connection 184.173.149.242:52816 (12 connections now open)
m31102| Wed Jun 13 22:33:42 [initandlisten] connection accepted from 184.173.149.242:55961 #18 (13 connections now open)
m31102| Wed Jun 13 22:33:42 [conn18] authenticate db: local { authenticate: 1, nonce: "867f52d89925fab4", user: "__system", key: "a658210580bf1aafd2265f44ff877b64" }
m31200| Wed Jun 13 22:33:42 [conn11] getmore test.tmp.mrs.foo_1339644802_0 query: { query: {}, orderby: { _id: 1 } } cursorid:4920575551960104359 ntoreturn:0 keyUpdates:0 numYields: 34 locks(micros) r:337324 nreturned:32524 reslen:1073312 337ms
m31202| Wed Jun 13 22:33:44 [conn13] end connection 184.173.149.242:43006 (10 connections now open)
m31202| Wed Jun 13 22:33:44 [initandlisten] connection accepted from 184.173.149.242:56962 #15 (11 connections now open)
m31202| Wed Jun 13 22:33:44 [conn15] authenticate db: local { authenticate: 1, nonce: "d292cdb0837afd02", user: "__system", key: "06fee42f676a970d818507553127fc30" }
m31102| Wed Jun 13 22:33:44 [conn15] end connection 184.173.149.242:52818 (12 connections now open)
m31102| Wed Jun 13 22:33:44 [initandlisten] connection accepted from 184.173.149.242:55963 #19 (13 connections now open)
m31102| Wed Jun 13 22:33:44 [conn19] authenticate db: local { authenticate: 1, nonce: "72847836b5a86df9", user: "__system", key: "e426fda7ffd3b809cbeefdc07ff2c342" }
m31201| Wed Jun 13 22:33:44 [conn13] end connection 184.173.149.242:59565 (10 connections now open)
m31201| Wed Jun 13 22:33:44 [initandlisten] connection accepted from 184.173.149.242:38360 #15 (11 connections now open)
m31201| Wed Jun 13 22:33:44 [conn15] authenticate db: local { authenticate: 1, nonce: "77b2a6cccbdc7bfd", user: "__system", key: "b7531bc9505df8e30af894f2ae453d67" }
m31101| Wed Jun 13 22:33:44 [conn16] end connection 184.173.149.242:56662 (13 connections now open)
m31101| Wed Jun 13 22:33:44 [initandlisten] connection accepted from 184.173.149.242:51018 #20 (14 connections now open)
m31101| Wed Jun 13 22:33:44 [conn20] authenticate db: local { authenticate: 1, nonce: "f602b1e63c0cc68e", user: "__system", key: "d88ef9286ef7cfdd21554531ccbcbcef" }
m29000| Wed Jun 13 22:33:52 [initandlisten] connection accepted from 184.173.149.242:58763 #17 (15 connections now open)
m29000| Wed Jun 13 22:33:52 [conn17] authenticate db: local { authenticate: 1, nonce: "f8c052f691275e02", user: "__system", key: "d6bd228aea481e4b46834a6618fc3f0d" }
m30999| Wed Jun 13 22:33:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' acquired, ts : 4fd95ba093454f4c31525101
m30999| Wed Jun 13 22:33:52 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:33:52 [Balancer] d1 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:33:52 [Balancer] d2 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:33:52 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:33:52 [Balancer] d1
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_38475.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 38475.0 }, max: { x: 50152.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_50152.0", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 50152.0 }, max: { x: 61501.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_61501.0", lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 61501.0 }, max: { x: 73876.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_73876.0", lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 73876.0 }, max: { x: 85769.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_85769.0", lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 85769.0 }, max: { x: 97359.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_97359.0", lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 97359.0 }, max: { x: MaxKey }, shard: "d1" }
m30999| Wed Jun 13 22:33:52 [Balancer] d2
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 16732.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_16732.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 16732.0 }, max: { x: 27969.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_27969.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 27969.0 }, max: { x: 38475.0 }, shard: "d2" }
m30999| Wed Jun 13 22:33:52 [Balancer] ----
m30999| Wed Jun 13 22:33:52 [Balancer] chose [d1] to [d2] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m30999| Wed Jun 13 22:33:52 [Balancer] dev: lastError==0 won't report:right object ({}) doesn't have full shard key ({ x: 1.0 })
m30999| Wed Jun 13 22:33:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644701:1804289383' unlocked.
m30999| Wed Jun 13 22:33:52 [Balancer] scoped connection to tp2.10gen.cc:29000 not being returned to the pool
m30999| Wed Jun 13 22:33:52 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ x: 1.0 })
m29000| Wed Jun 13 22:33:52 [conn6] end connection 184.173.149.242:55431 (14 connections now open)
m29000| Wed Jun 13 22:33:52 [initandlisten] connection accepted from 184.173.149.242:58764 #18 (15 connections now open)
m29000| Wed Jun 13 22:33:52 [conn18] authenticate db: local { authenticate: 1, nonce: "53921d3f8537ae4e", user: "__system", key: "096d6d80d269010b7f8958db5459e570" }
m31000| Wed Jun 13 22:33:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' acquired, ts : 4fd95ba0e24b46bcab13cf4d
m31000| Wed Jun 13 22:33:52 [Balancer] ---- ShardInfoMap
m31000| Wed Jun 13 22:33:52 [Balancer] d1 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m31000| Wed Jun 13 22:33:52 [Balancer] d2 maxSize: 0 currSize: 176 draining: 0 hasOpsQueued: 0
m31000| Wed Jun 13 22:33:52 [Balancer] ---- ShardToChunksMap
m31000| Wed Jun 13 22:33:52 [Balancer] d1
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 0.0 }, max: { x: 5850.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_38475.0", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 38475.0 }, max: { x: 50152.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_50152.0", lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 50152.0 }, max: { x: 61501.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_61501.0", lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 61501.0 }, max: { x: 73876.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_73876.0", lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 73876.0 }, max: { x: 85769.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_85769.0", lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 85769.0 }, max: { x: 97359.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_97359.0", lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 97359.0 }, max: { x: MaxKey }, shard: "d1" }
m31000| Wed Jun 13 22:33:52 [Balancer] d2
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_5850.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 5850.0 }, max: { x: 16732.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_16732.0", lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 16732.0 }, max: { x: 27969.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:52 [Balancer] { _id: "test.foo-x_27969.0", lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: 27969.0 }, max: { x: 38475.0 }, shard: "d2" }
m31000| Wed Jun 13 22:33:52 [Balancer] ----
m31000| Wed Jun 13 22:33:52 [Balancer] chose [d1] to [d2] { _id: "test.foo-x_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95b47e24b46bcab13cf46'), ns: "test.foo", min: { x: MinKey }, max: { x: 0.0 }, shard: "d1" }
m31000| Wed Jun 13 22:33:52 [Balancer] dev: lastError==0 won't report:right object ({}) doesn't have full shard key ({ x: 1.0 })
m31000| Wed Jun 13 22:33:52 [Balancer] distributed lock 'balancer/tp2.10gen.cc:31000:1339644702:1804289383' unlocked.
m31000| Wed Jun 13 22:33:52 [Balancer] scoped connection to tp2.10gen.cc:29000 not being returned to the pool
m31000| Wed Jun 13 22:33:52 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ x: 1.0 })
m29000| Wed Jun 13 22:33:52 [conn11] end connection 184.173.149.242:55438 (14 connections now open)
m31100| Wed Jun 13 22:33:55 [conn9]
m31100| debug have W lock but w would suffice for command drop
m31100| Wed Jun 13 22:33:55 [conn9] CMD: drop test.mrout
m31100| Wed Jun 13 22:33:55 [conn9]
m31100| debug have W lock but w would suffice for command drop
m31100| Wed Jun 13 22:33:55 [conn9] CMD: drop test.tmp.mr.foo_1
m31100| Wed Jun 13 22:33:55 [conn9] CMD: drop test.tmp.mr.foo_1
m31100| Wed Jun 13 22:33:55 [conn9] CMD: drop test.tmp.mr.foo_1
m31100| Wed Jun 13 22:33:55 [conn9] command test.$cmd command: { mapreduce.shardedfinish: { mapreduce: "foo", map: function () {
m31100| emit(this.x, 1);
m31100| }, reduce: function (key, values) {
m31100| return values.length;
m31100| }, out: "mrout" }, inputDB: "test", shardedOutputCollection: "tmp.mrs.foo_1339644802_0", shards: { d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102: { result: "tmp.mrs.foo_1339644802_0", timeMillis: 17987, counts: { input: 67375, emit: 67375, reduce: 0, output: 67375 }, ok: 1.0 }, d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202: { result: "tmp.mrs.foo_1339644802_0", timeMillis: 8261, counts: { input: 32625, emit: 32625, reduce: 0, output: 32625 }, ok: 1.0 } }, shardCounts: { d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102: { input: 67375, emit: 67375, reduce: 0, output: 67375 }, d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202: { input: 32625, emit: 32625, reduce: 0, output: 32625 } }, counts: { emit: 100000, input: 100000, output: 100000, reduce: 0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:7244 r:4812364 w:41629360 reslen:150 15084ms
m31100| Wed Jun 13 22:33:55 [conn8] CMD: drop test.tmp.mrs.foo_1339644802_0
m31200| Wed Jun 13 22:33:55 [conn8] CMD: drop test.tmp.mrs.foo_1339644802_0
m31201| Wed Jun 13 22:33:55 [rsSync]
m31201| debug have W lock but w would suffice for command drop
m31201| Wed Jun 13 22:33:55 [rsSync] CMD: drop test.tmp.mrs.foo_1339644802_0
{
    "result" : "mrout",
    "counts" : {
        "input" : NumberLong(100000),
        "emit" : NumberLong(100000),
        "reduce" : NumberLong(0),
        "output" : NumberLong(100000)
    },
    "timeMillis" : 33089,
    "timing" : {
        "shardProcessing" : 18003,
        "postProcessing" : 15086
    },
    "shardCounts" : {
        "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102" : {
            "input" : 67375,
            "emit" : 67375,
            "reduce" : 0,
            "output" : 67375
        },
        "d2/tp2.10gen.cc:31200,tp2.10gen.cc:31201,tp2.10gen.cc:31202" : {
            "input" : 32625,
            "emit" : 32625,
            "reduce" : 0,
            "output" : 32625
        }
    },
    "postProcessCounts" : {
        "d1/tp2.10gen.cc:31100,tp2.10gen.cc:31101,tp2.10gen.cc:31102" : {
            "input" : NumberLong(100000),
            "reduce" : NumberLong(0),
            "output" : NumberLong(100000)
        }
    },
    "ok" : 1
}
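This result document is what mongos returns for the sharded mapReduce: each shard runs the first pass into tmp.mrs.foo_1339644802_0 (the per-shard timings and counts appear under shardCounts), and the post-processing pass merges the partial results into test.mrout. The map and reduce functions are visible verbatim in the per-shard command logs above; a minimal reconstruction of the shell call that yields this kind of output (the variable name is an assumption):

    // sharded map-reduce on test.foo: count documents per x value, output to test.mrout
    var res = db.getSiblingDB("test").foo.mapReduce(
        function () { emit(this.x, 1); },
        function (key, values) { return values.length; },
        { out: "mrout" }
    );
    printjson(res);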
m31202| Wed Jun 13 22:33:55 [rsSync]
m31202| debug have W lock but w would suffice for command drop
m31202| Wed Jun 13 22:33:55 [rsSync] CMD: drop test.tmp.mrs.foo_1339644802_0
Wed Jun 13 22:33:55 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongodump --host 127.0.0.1:31000 -d test -u bar -p baz
m31101| Wed Jun 13 22:33:55 [rsSync]
m31101| debug have W lock but w would suffice for command drop
m31101| Wed Jun 13 22:33:55 [rsSync] CMD: drop test.tmp.mrs.foo_1339644802_0
m31102| Wed Jun 13 22:33:55 [rsSync]
m31102| debug have W lock but w would suffice for command drop
m31102| Wed Jun 13 22:33:55 [rsSync] CMD: drop test.tmp.mrs.foo_1339644802_0
sh10831| connected to: 127.0.0.1:31000
m31000| Wed Jun 13 22:33:56 [mongosMain] connection accepted from 127.0.0.1:55418 #2 (2 connections now open)
m31000| Wed Jun 13 22:33:56 [conn] authenticate db: test { authenticate: 1, nonce: "45ed019d46bc403a", user: "bar", key: "b25723ea4ff32b5b37b7baf8245be4b3" }
sh10831| Wed Jun 13 22:33:56 DATABASE: test to dump/test
m31100| Wed Jun 13 22:33:56 [initandlisten] connection accepted from 184.173.149.242:44987 #44 (24 connections now open)
m31100| Wed Jun 13 22:33:56 [conn44] authenticate db: local { authenticate: 1, nonce: "fa6a4087199f7fdb", user: "__system", key: "d938a08d2f38677b5adb0e36833a48ae" }
m31200| Wed Jun 13 22:33:56 [initandlisten] connection accepted from 184.173.149.242:39082 #38 (21 connections now open)
m31200| Wed Jun 13 22:33:56 [conn38] authenticate db: local { authenticate: 1, nonce: "560355da567ab34e", user: "__system", key: "8b02244387e87e013fcdc1e399568704" }
m31102| Wed Jun 13 22:33:56 [initandlisten] connection accepted from 184.173.149.242:55971 #20 (14 connections now open)
m31102| Wed Jun 13 22:33:56 [conn20] authenticate db: local { authenticate: 1, nonce: "d4c31683e4afba43", user: "__system", key: "29758c0702eafb1956aac628b21c4317" }
sh10831| Wed Jun 13 22:33:56 test.foo to dump/test/foo.bson
m31000| range.universal(): 1
m31201| Wed Jun 13 22:33:57 [initandlisten] connection accepted from 184.173.149.242:38368 #16 (12 connections now open)
m31201| Wed Jun 13 22:33:57 [conn16] authenticate db: local { authenticate: 1, nonce: "61f52c61a0487644", user: "__system", key: "c1cbf5c1684d4eef3d4dd230c72d3916" }
m31000| range.universal(): 1
m31201| Wed Jun 13 22:33:57 [conn12] getmore test.foo query: { query: {}, $snapshot: true } cursorid:7080353861672500372 ntoreturn:0 keyUpdates:0 numYields: 39 locks(micros) r:316377 nreturned:32524 reslen:3154848 316ms
m31102| Wed Jun 13 22:33:57 [conn9] getmore test.foo query: { query: {}, $snapshot: true } cursorid:8647549877083831444 ntoreturn:0 keyUpdates:0 numYields: 42 locks(micros) r:496387 nreturned:43240 reslen:4194300 496ms
m31102| Wed Jun 13 22:33:58 [conn9] getmore test.foo query: { query: {}, $snapshot: true } cursorid:8647549877083831444 ntoreturn:0 keyUpdates:0 numYields: 20 locks(micros) r:788621 nreturned:24034 reslen:2331318 292ms
sh10831| Wed Jun 13 22:33:58 100000 objects
sh10831| Wed Jun 13 22:33:58 Metadata for test.foo to dump/test/foo.metadata.json
sh10831| Wed Jun 13 22:33:58 test.system.users to dump/test/system.users.bson
sh10831| Wed Jun 13 22:33:58 2 objects
sh10831| Wed Jun 13 22:33:58 Metadata for test.system.users to dump/test/system.users.metadata.json
sh10831| Wed Jun 13 22:33:58 test.mrout to dump/test/mrout.bson
m31201| Wed Jun 13 22:33:58 [conn14] end connection 184.173.149.242:38341 (11 connections now open)
m31201| Wed Jun 13 22:33:58 [initandlisten] connection accepted from 184.173.149.242:38369 #17 (12 connections now open)
m31201| Wed Jun 13 22:33:58 [conn17] authenticate db: local { authenticate: 1, nonce: "21a53f8f8e516391", user: "__system", key: "42682389e84bee78c6f8ffc1c40fb1fc" }
m31101| Wed Jun 13 22:33:58 [conn17] end connection 184.173.149.242:50999 (13 connections now open)
m31101| Wed Jun 13 22:33:58 [initandlisten] connection accepted from 184.173.149.242:51027 #21 (14 connections now open)
m31101| Wed Jun 13 22:33:58 [conn21] authenticate db: local { authenticate: 1, nonce: "f21accf35df5d8eb", user: "__system", key: "d293300856e1cecca6d44b91f42fb946" }
m31102| Wed Jun 13 22:33:59 [conn9] getmore test.mrout query: { query: {}, $snapshot: true } cursorid:8568420194191278457 ntoreturn:0 keyUpdates:0 numYields: 95 locks(micros) r:1782648 nreturned:99899 reslen:3296687 994ms
sh10831| Wed Jun 13 22:33:59 100000 objects
sh10831| Wed Jun 13 22:33:59 Metadata for test.mrout to dump/test/mrout.metadata.json
m31000| Wed Jun 13 22:33:59 [conn] end connection 127.0.0.1:55418 (1 connection now open)
result: 0
starting read only tests
testing find that should fail
m31000| Wed Jun 13 22:33:59 [mongosMain] connection accepted from 127.0.0.1:55425 #3 (2 connections now open)
m29000| Wed Jun 13 22:33:59 [initandlisten] connection accepted from 184.173.149.242:58773 #19 (15 connections now open)
m29000| Wed Jun 13 22:33:59 [conn19] authenticate db: local { authenticate: 1, nonce: "92fbb00bb27fb79d", user: "__system", key: "12539c61b529231ae76927c635370cfc" }
logging in
m31000| Wed Jun 13 22:33:59 [conn] authenticate db: test { authenticate: 1.0, user: "sad", nonce: "7dd12d833d681dfe", key: "0ad40b5c42bc759a1fd05b3c2d4e3b16" }
{ "dbname" : "test", "user" : "sad", "readOnly" : true, "ok" : 1 }
testing find that should work
m31000| range.universal(): 1
testing write that should fail
testing read command (should succeed)
m31000| range.universal(): 1
make sure currentOp/killOp fail
testing logout (should succeed)
make sure currentOp/killOp fail again
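The read-only checks above are driven by the test script; a minimal sketch of that flow in mongo shell JavaScript (the connection variable and the user's password are assumptions, only the username "sad" and the readOnly flag appear in the log) might look like:

    // hypothetical sketch of the read-only user checks, not the actual jstest source
    var testDB = conn.getDB("test");                       // conn: an existing Mongo() connection to the mongos
    assert.throws(function () { testDB.foo.findOne(); });  // unauthenticated find should fail
    assert(testDB.auth("sad", "password"), "login");       // password is a placeholder
    assert.neq(null, testDB.foo.findOne());                // reads succeed for a readOnly user
    testDB.foo.insert({ x: 1 });
    assert.neq(null, testDB.getLastError());               // writes are rejected
    assert.eq(1, testDB.runCommand({ count: "foo" }).ok);  // read commands still succeed
    testDB.logout();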
m30999| Wed Jun 13 22:33:59 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Wed Jun 13 22:33:59 [conn3] end connection 184.173.149.242:55428 (14 connections now open)
m29000| Wed Jun 13 22:33:59 [conn4] end connection 184.173.149.242:55429 (14 connections now open)
m29000| Wed Jun 13 22:33:59 [conn7] end connection 184.173.149.242:55433 (14 connections now open)
m31101| Wed Jun 13 22:33:59 [conn8] end connection 184.173.149.242:56568 (13 connections now open)
m31101| Wed Jun 13 22:33:59 [conn9] end connection 184.173.149.242:56571 (12 connections now open)
m31100| Wed Jun 13 22:33:59 [conn26] end connection 184.173.149.242:42858 (23 connections now open)
m31100| Wed Jun 13 22:33:59 [conn28] end connection 184.173.149.242:42864 (23 connections now open)
m31100| Wed Jun 13 22:33:59 [conn27] end connection 184.173.149.242:42861 (22 connections now open)
m31200| Wed Jun 13 22:33:59 [conn27] end connection 184.173.149.242:41226 (20 connections now open)
m29000| Wed Jun 13 22:33:59 [conn17] end connection 184.173.149.242:58763 (12 connections now open)
m31102| Wed Jun 13 22:33:59 [conn7] end connection 184.173.149.242:52727 (13 connections now open)
m31102| Wed Jun 13 22:33:59 [conn8] end connection 184.173.149.242:52730 (13 connections now open)
m31200| Wed Jun 13 22:33:59 [conn29] end connection 184.173.149.242:41232 (19 connections now open)
m31202| Wed Jun 13 22:33:59 [conn9] end connection 184.173.149.242:42992 (10 connections now open)
m31202| Wed Jun 13 22:33:59 [conn10] end connection 184.173.149.242:42995 (9 connections now open)
m31200| Wed Jun 13 22:33:59 [conn28] end connection 184.173.149.242:41229 (18 connections now open)
m31201| Wed Jun 13 22:33:59 [conn11] end connection 184.173.149.242:59551 (11 connections now open)
m31201| Wed Jun 13 22:33:59 [conn10] end connection 184.173.149.242:59548 (10 connections now open)
Wed Jun 13 22:34:00 shell: stopped mongo program on port 30999
m29000| Wed Jun 13 22:34:00 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Wed Jun 13 22:34:00 [interruptThread] now exiting
m29000| Wed Jun 13 22:34:00 dbexit:
m29000| Wed Jun 13 22:34:00 [interruptThread] shutdown: going to close listening sockets...
m29000| Wed Jun 13 22:34:00 [interruptThread] closing listening socket: 17
m29000| Wed Jun 13 22:34:00 [interruptThread] closing listening socket: 18
m29000| Wed Jun 13 22:34:00 [interruptThread] closing listening socket: 19
m29000| Wed Jun 13 22:34:00 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Wed Jun 13 22:34:00 [interruptThread] shutdown: going to flush diaglog...
m29000| Wed Jun 13 22:34:00 [interruptThread] shutdown: going to close sockets...
m29000| Wed Jun 13 22:34:00 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Wed Jun 13 22:34:00 [interruptThread] shutdown: closing all files...
m29000| Wed Jun 13 22:34:00 [interruptThread] closeAllFiles() finished
m29000| Wed Jun 13 22:34:00 [interruptThread] shutdown: removing fs lock...
m29000| Wed Jun 13 22:34:00 dbexit: really exiting now
m31200| Wed Jun 13 22:34:00 [conn35] end connection 184.173.149.242:39059 (17 connections now open)
m31200| Wed Jun 13 22:34:00 [initandlisten] connection accepted from 184.173.149.242:39089 #39 (18 connections now open)
m31200| Wed Jun 13 22:34:00 [conn39] authenticate db: local { authenticate: 1, nonce: "ababd13a27fe5799", user: "__system", key: "9b1c7002a394addb2741c077fb000d5e" }
m31100| Wed Jun 13 22:34:00 [conn38] end connection 184.173.149.242:44966 (20 connections now open)
m31100| Wed Jun 13 22:34:00 [initandlisten] connection accepted from 184.173.149.242:44996 #45 (21 connections now open)
m31100| Wed Jun 13 22:34:00 [conn45] authenticate db: local { authenticate: 1, nonce: "a776b8e66ad373b", user: "__system", key: "b90767e5948111690d5451f9822fdb26" }
m31200| Wed Jun 13 22:34:00 [conn36] end connection 184.173.149.242:39061 (17 connections now open)
m31200| Wed Jun 13 22:34:00 [initandlisten] connection accepted from 184.173.149.242:39091 #40 (18 connections now open)
m31200| Wed Jun 13 22:34:00 [conn40] authenticate db: local { authenticate: 1, nonce: "bf521d25d958f60", user: "__system", key: "17ffe9ea7326a4cdda1c59787fcce2f3" }
m31100| Wed Jun 13 22:34:00 [conn39] end connection 184.173.149.242:44968 (20 connections now open)
m31100| Wed Jun 13 22:34:00 [initandlisten] connection accepted from 184.173.149.242:44998 #46 (21 connections now open)
m31100| Wed Jun 13 22:34:00 [conn46] authenticate db: local { authenticate: 1, nonce: "6bab0185d206b359", user: "__system", key: "c134a69076177f7a2baeb52d4331b747" }
Wed Jun 13 22:34:01 shell: stopped mongo program on port 29000
*** ShardingTest auth1 completed successfully in 139.725 seconds ***
m31000| Wed Jun 13 22:34:01 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31101| Wed Jun 13 22:34:01 [conn5] end connection 184.173.149.242:56542 (11 connections now open)
m31101| Wed Jun 13 22:34:01 [conn11] end connection 184.173.149.242:56579 (11 connections now open)
m31201| Wed Jun 13 22:34:01 [conn12] end connection 184.173.149.242:59559 (9 connections now open)
m31200| Wed Jun 13 22:34:01 [conn8] end connection 184.173.149.242:41188 (17 connections now open)
m31202| Wed Jun 13 22:34:01 [conn5] end connection 184.173.149.242:42948 (8 connections now open)
m31201| Wed Jun 13 22:34:01 [conn5] end connection 184.173.149.242:59504 (8 connections now open)
m31102| Wed Jun 13 22:34:01 [conn9] end connection 184.173.149.242:52735 (11 connections now open)
m31102| Wed Jun 13 22:34:01 [conn5] end connection 184.173.149.242:52701 (10 connections now open)
m31202| Wed Jun 13 22:34:01 [conn11] end connection 184.173.149.242:43003 (7 connections now open)
m31101| Wed Jun 13 22:34:01 [conn10] end connection 184.173.149.242:56576 (9 connections now open)
m31200| Wed Jun 13 22:34:01 [conn13] end connection 184.173.149.242:41207 (16 connections now open)
m31100| Wed Jun 13 22:34:01 [conn6] end connection 184.173.149.242:42832 (20 connections now open)
m31100| Wed Jun 13 22:34:01 [conn8] end connection 184.173.149.242:42838 (20 connections now open)
m31100| Wed Jun 13 22:34:01 [conn31] end connection 184.173.149.242:42869 (20 connections now open)
m31100| Wed Jun 13 22:34:01 [conn9] end connection 184.173.149.242:42839 (20 connections now open)
m31100| Wed Jun 13 22:34:01 [conn44] end connection 184.173.149.242:44987 (17 connections now open)
m31200| Wed Jun 13 22:34:01 [conn6] end connection 184.173.149.242:41182 (16 connections now open)
m31100| Wed Jun 13 22:34:01 [conn30] end connection 184.173.149.242:42866 (16 connections now open)
m31102| Wed Jun 13 22:34:01 [conn20] end connection 184.173.149.242:55971 (9 connections now open)
m31200| Wed Jun 13 22:34:01 [conn38] end connection 184.173.149.242:39082 (14 connections now open)
m31201| Wed Jun 13 22:34:01 [conn16] end connection 184.173.149.242:38368 (7 connections now open)
m31200| Wed Jun 13 22:34:01 [conn34] end connection 184.173.149.242:41237 (17 connections now open)
m31100| Wed Jun 13 22:34:02 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Wed Jun 13 22:34:02 [interruptThread] now exiting
m31100| Wed Jun 13 22:34:02 dbexit:
m31100| Wed Jun 13 22:34:02 [interruptThread] shutdown: going to close listening sockets...
m31100| Wed Jun 13 22:34:02 [interruptThread] closing listening socket: 29
m31100| Wed Jun 13 22:34:02 [interruptThread] closing listening socket: 31
m31100| Wed Jun 13 22:34:02 [interruptThread] closing listening socket: 34
m31100| Wed Jun 13 22:34:02 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Wed Jun 13 22:34:02 [interruptThread] shutdown: going to flush diaglog...
m31100| Wed Jun 13 22:34:02 [interruptThread] shutdown: going to close sockets...
m31100| Wed Jun 13 22:34:02 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Wed Jun 13 22:34:02 [interruptThread] shutdown: closing all files...
m31100| Wed Jun 13 22:34:02 [conn42] end connection 184.173.149.242:44973 (14 connections now open)
m31101| Wed Jun 13 22:34:02 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: tp2.10gen.cc:31100
m31102| Wed Jun 13 22:34:02 [conn17] end connection 184.173.149.242:55957 (8 connections now open)
m31101| Wed Jun 13 22:34:02 [conn19] end connection 184.173.149.242:51009 (8 connections now open)
m31102| Wed Jun 13 22:34:02 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: tp2.10gen.cc:31100
m31102| Wed Jun 13 22:34:02 [conn16] end connection 184.173.149.242:55954 (7 connections now open)
m31102| Wed Jun 13 22:34:02 [conn18] end connection 184.173.149.242:55961 (7 connections now open)
m31100| Wed Jun 13 22:34:02 [conn41] end connection 184.173.149.242:44970 (14 connections now open)
m31100| Wed Jun 13 22:34:02 [conn43] end connection 184.173.149.242:44976 (14 connections now open)
m31201| Wed Jun 13 22:34:02 [conn8] end connection 184.173.149.242:59516 (6 connections now open)
m31101| Wed Jun 13 22:34:02 [conn21] end connection 184.173.149.242:51027 (7 connections now open)
m31200| Wed Jun 13 22:34:02 [conn11] end connection 184.173.149.242:41194 (12 connections now open)
m31202| Wed Jun 13 22:34:02 [conn7] end connection 184.173.149.242:42957 (6 connections now open)
m31200| Wed Jun 13 22:34:02 [conn12] end connection 184.173.149.242:41197 (12 connections now open)
m31200| Wed Jun 13 22:34:02 [conn10] end connection 184.173.149.242:41191 (12 connections now open)
m31201| Wed Jun 13 22:34:02 [conn7] end connection 184.173.149.242:59513 (5 connections now open)
m31202| Wed Jun 13 22:34:02 [conn8] end connection 184.173.149.242:42960 (5 connections now open)
m31100| Wed Jun 13 22:34:02 [conn1] end connection 184.173.149.242:42817 (14 connections now open)
m31101| Wed Jun 13 22:34:02 [conn18] end connection 184.173.149.242:51006 (6 connections now open)
m31200| Wed Jun 13 22:34:02 [conn37] end connection 184.173.149.242:39071 (9 connections now open)
m31100| Wed Jun 13 22:34:02 [interruptThread] closeAllFiles() finished
m31100| Wed Jun 13 22:34:02 [interruptThread] shutdown: removing fs lock...
m31100| Wed Jun 13 22:34:02 dbexit: really exiting now
m31101| Wed Jun 13 22:34:02 [rsHealthPoll] DBClientCursor::init call() failed
m31101| Wed Jun 13 22:34:02 [rsHealthPoll] replSet info tp2.10gen.cc:31100 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31100 ns: admin.$cmd query: { replSetHeartbeat: "d1", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31101" }
m31101| Wed Jun 13 22:34:02 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state DOWN
m31101| Wed Jun 13 22:34:02 [rsMgr] not electing self, tp2.10gen.cc:31102 would veto
m31102| Wed Jun 13 22:34:02 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Wed Jun 13 22:34:02 [rsHealthPoll] replSet info tp2.10gen.cc:31100 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31100 ns: admin.$cmd query: { replSetHeartbeat: "d1", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31102" }
m31102| Wed Jun 13 22:34:02 [rsHealthPoll] replSet member tp2.10gen.cc:31100 is now in state DOWN
m31102| Wed Jun 13 22:34:02 [rsMgr] replSet tie 1 sleeping a little 160ms
m31102| Wed Jun 13 22:34:02 [rsMgr] replSet info electSelf 2
m31101| Wed Jun 13 22:34:02 [conn20] replSet received elect msg { replSetElect: 1, set: "d1", who: "tp2.10gen.cc:31102", whoid: 2, cfgver: 1, round: ObjectId('4fd95baaca39499b180cb214') }
m31101| Wed Jun 13 22:34:02 [conn20] replSet info voting yea for tp2.10gen.cc:31102 (2)
m31102| Wed Jun 13 22:34:02 [rsMgr] replSet elect res: { vote: 1, round: ObjectId('4fd95baaca39499b180cb214'), ok: 1.0 }
m31102| Wed Jun 13 22:34:02 [rsMgr] info PageFaultRetryableSection will not yield, already locked upon reaching
m31102| Wed Jun 13 22:34:02 [rsMgr] replSet PRIMARY
m31101| Wed Jun 13 22:34:03 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Wed Jun 13 22:34:03 [interruptThread] now exiting
m31101| Wed Jun 13 22:34:03 dbexit:
m31101| Wed Jun 13 22:34:03 [interruptThread] shutdown: going to close listening sockets...
m31101| Wed Jun 13 22:34:03 [interruptThread] closing listening socket: 34
m31101| Wed Jun 13 22:34:03 [interruptThread] closing listening socket: 35
m31101| Wed Jun 13 22:34:03 [interruptThread] closing listening socket: 36
m31101| Wed Jun 13 22:34:03 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Wed Jun 13 22:34:03 [interruptThread] shutdown: going to flush diaglog...
m31101| Wed Jun 13 22:34:03 [interruptThread] shutdown: going to close sockets...
m31101| Wed Jun 13 22:34:03 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Wed Jun 13 22:34:03 [interruptThread] shutdown: closing all files...
m31101| Wed Jun 13 22:34:03 [conn1] end connection 184.173.149.242:56529 (5 connections now open)
m31102| Wed Jun 13 22:34:03 [conn19] end connection 184.173.149.242:55963 (5 connections now open)
m31101| Wed Jun 13 22:34:03 [interruptThread] closeAllFiles() finished
m31101| Wed Jun 13 22:34:03 [interruptThread] shutdown: removing fs lock...
m31101| Wed Jun 13 22:34:03 dbexit: really exiting now
m31102| Wed Jun 13 22:34:04 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Wed Jun 13 22:34:04 [interruptThread] now exiting
m31102| Wed Jun 13 22:34:04 dbexit:
m31102| Wed Jun 13 22:34:04 [interruptThread] shutdown: going to close listening sockets...
m31102| Wed Jun 13 22:34:04 [interruptThread] closing listening socket: 37
m31102| Wed Jun 13 22:34:04 [interruptThread] closing listening socket: 39
m31102| Wed Jun 13 22:34:04 [interruptThread] closing listening socket: 40
m31102| Wed Jun 13 22:34:04 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Wed Jun 13 22:34:04 [interruptThread] shutdown: going to flush diaglog...
m31102| Wed Jun 13 22:34:04 [interruptThread] shutdown: going to close sockets...
m31102| Wed Jun 13 22:34:04 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Wed Jun 13 22:34:04 [interruptThread] shutdown: closing all files...
m31102| Wed Jun 13 22:34:04 [conn1] end connection 184.173.149.242:52690 (4 connections now open)
m31102| Wed Jun 13 22:34:04 [interruptThread] closeAllFiles() finished
m31102| Wed Jun 13 22:34:04 [interruptThread] shutdown: removing fs lock...
m31102| Wed Jun 13 22:34:04 dbexit: really exiting now
m31200| Wed Jun 13 22:34:04 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m31200| Wed Jun 13 22:34:04 [ReplicaSetMonitorWatcher] dev: lastError==0 won't report:DBClientBase::findN: transport error: tp2.10gen.cc:31100 ns: admin.$cmd query: { ismaster: 1 }
m31200| Wed Jun 13 22:34:04 [ReplicaSetMonitorWatcher] trying reconnect to tp2.10gen.cc:31100
m31200| Wed Jun 13 22:34:04 [ReplicaSetMonitorWatcher] reconnect tp2.10gen.cc:31100 failed couldn't connect to server tp2.10gen.cc:31100
m31200| Wed Jun 13 22:34:04 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m31200| Wed Jun 13 22:34:04 [ReplicaSetMonitorWatcher] dev: lastError==0 won't report:DBClientBase::findN: transport error: tp2.10gen.cc:31101 ns: admin.$cmd query: { ismaster: 1 }
m31200| Wed Jun 13 22:34:04 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m31200| Wed Jun 13 22:34:04 [ReplicaSetMonitorWatcher] dev: lastError==0 won't report:DBClientBase::findN: transport error: tp2.10gen.cc:31102 ns: admin.$cmd query: { ismaster: 1 }
m31200| Wed Jun 13 22:34:05 got signal 15 (Terminated), will terminate after current cmd ends
m31200| Wed Jun 13 22:34:05 [interruptThread] now exiting
m31200| Wed Jun 13 22:34:05 dbexit:
m31200| Wed Jun 13 22:34:05 [interruptThread] shutdown: going to close listening sockets...
m31200| Wed Jun 13 22:34:05 [interruptThread] closing listening socket: 39
m31200| Wed Jun 13 22:34:05 [interruptThread] closing listening socket: 43
m31200| Wed Jun 13 22:34:05 [interruptThread] closing listening socket: 44
m31200| Wed Jun 13 22:34:05 [interruptThread] removing socket file: /tmp/mongodb-31200.sock
m31200| Wed Jun 13 22:34:05 [interruptThread] shutdown: going to flush diaglog...
m31200| Wed Jun 13 22:34:05 [interruptThread] shutdown: going to close sockets...
m31200| Wed Jun 13 22:34:05 [interruptThread] shutdown: waiting for fs preallocator...
m31200| Wed Jun 13 22:34:05 [interruptThread] shutdown: closing all files...
m31200| Wed Jun 13 22:34:05 [conn1] end connection 184.173.149.242:41164 (8 connections now open)
m31201| Wed Jun 13 22:34:05 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: tp2.10gen.cc:31200
m31202| Wed Jun 13 22:34:05 [conn14] end connection 184.173.149.242:56960 (4 connections now open)
m31202| Wed Jun 13 22:34:05 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: tp2.10gen.cc:31200
m31201| Wed Jun 13 22:34:05 [conn17] end connection 184.173.149.242:38369 (4 connections now open)
m31200| Wed Jun 13 22:34:05 [interruptThread] closeAllFiles() finished
m31200| Wed Jun 13 22:34:05 [interruptThread] shutdown: removing fs lock...
m31200| Wed Jun 13 22:34:05 dbexit: really exiting now
m31201| Wed Jun 13 22:34:06 got signal 15 (Terminated), will terminate after current cmd ends
m31201| Wed Jun 13 22:34:06 [interruptThread] now exiting
m31201| Wed Jun 13 22:34:06 dbexit:
m31201| Wed Jun 13 22:34:06 [interruptThread] shutdown: going to close listening sockets...
m31201| Wed Jun 13 22:34:06 [interruptThread] closing listening socket: 42
m31201| Wed Jun 13 22:34:06 [interruptThread] closing listening socket: 43
m31201| Wed Jun 13 22:34:06 [interruptThread] closing listening socket: 45
m31201| Wed Jun 13 22:34:06 [interruptThread] removing socket file: /tmp/mongodb-31201.sock
m31201| Wed Jun 13 22:34:06 [interruptThread] shutdown: going to flush diaglog...
m31201| Wed Jun 13 22:34:06 [interruptThread] shutdown: going to close sockets...
m31201| Wed Jun 13 22:34:06 [interruptThread] shutdown: waiting for fs preallocator...
m31201| Wed Jun 13 22:34:06 [interruptThread] shutdown: closing all files...
m31202| Wed Jun 13 22:34:06 [conn15] end connection 184.173.149.242:56962 (3 connections now open)
m31201| Wed Jun 13 22:34:06 [conn1] end connection 184.173.149.242:59488 (3 connections now open)
m31201| Wed Jun 13 22:34:06 [interruptThread] closeAllFiles() finished
m31201| Wed Jun 13 22:34:06 [interruptThread] shutdown: removing fs lock...
m31201| Wed Jun 13 22:34:06 dbexit: really exiting now
m31202| Wed Jun 13 22:34:06 [rsHealthPoll] DBClientCursor::init call() failed
m31202| Wed Jun 13 22:34:06 [rsHealthPoll] replSet info tp2.10gen.cc:31200 is down (or slow to respond): DBClientBase::findN: transport error: tp2.10gen.cc:31200 ns: admin.$cmd query: { replSetHeartbeat: "d2", v: 1, pv: 1, checkEmpty: false, from: "tp2.10gen.cc:31202" }
m31202| Wed Jun 13 22:34:06 [rsHealthPoll] replSet member tp2.10gen.cc:31200 is now in state DOWN
m31202| Wed Jun 13 22:34:06 [MultiCommandJob] DBClientCursor::init call() failed
m31202| Wed Jun 13 22:34:06 [MultiCommandJob] dev: lastError==0 won't report:DBClientBase::findN: transport error: tp2.10gen.cc:31201 ns: admin.$cmd query: { replSetFresh: 1, set: "d2", opTime: new Date(5753730754580316161), who: "tp2.10gen.cc:31202", cfgver: 1, id: 2 }
m31202| Wed Jun 13 22:34:06 [MultiCommandJob] dev caught dbexception on multiCommand tp2.10gen.cc:31201
m31202| Wed Jun 13 22:34:06 [rsMgr] replSet freshest returns {}
m31202| Wed Jun 13 22:34:06 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31202| Wed Jun 13 22:34:06 [rsHealthPoll] replSet info tp2.10gen.cc:31201 is down (or slow to respond): socket exception
m31202| Wed Jun 13 22:34:06 [rsHealthPoll] replSet member tp2.10gen.cc:31201 is now in state DOWN
m31202| Wed Jun 13 22:34:06 [rsMgr] replSet can't see a majority, will not try to elect self
m31202| Wed Jun 13 22:34:07 got signal 15 (Terminated), will terminate after current cmd ends
m31202| Wed Jun 13 22:34:07 [interruptThread] now exiting
m31202| Wed Jun 13 22:34:07 dbexit:
m31202| Wed Jun 13 22:34:07 [interruptThread] shutdown: going to close listening sockets...
m31202| Wed Jun 13 22:34:07 [interruptThread] closing listening socket: 45
m31202| Wed Jun 13 22:34:07 [interruptThread] closing listening socket: 46
m31202| Wed Jun 13 22:34:07 [interruptThread] closing listening socket: 49
m31202| Wed Jun 13 22:34:07 [interruptThread] removing socket file: /tmp/mongodb-31202.sock
m31202| Wed Jun 13 22:34:07 [interruptThread] shutdown: going to flush diaglog...
m31202| Wed Jun 13 22:34:07 [interruptThread] shutdown: going to close sockets...
m31202| Wed Jun 13 22:34:07 [interruptThread] shutdown: waiting for fs preallocator...
m31202| Wed Jun 13 22:34:07 [interruptThread] shutdown: closing all files...
m31202| Wed Jun 13 22:34:07 [conn1] end connection 184.173.149.242:42934 (2 connections now open)
 146779.442072ms
Wed Jun 13 22:34:08 [initandlisten] connection accepted from 127.0.0.1:39934 #8 (7 connections now open)
*******************************************
Test : auth_add_shard.js ...
Command : /home/yellow/buildslave/Linux_32bit_debug/mongo/mongo --port 27999 --nodb /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/auth_add_shard.js --eval TestData = new Object();TestData.testPath = "/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/auth_add_shard.js";TestData.testFile = "auth_add_shard.js";TestData.testName = "auth_add_shard";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Wed Jun 13 22:34:08 2012
MongoDB shell version: 2.1.2-pre-
null
Resetting db path '/data/db/auth_add_shard10'
Wed Jun 13 22:34:08 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 30000 --dbpath /data/db/auth_add_shard10 --keyFile jstests/libs/key1
m30000| Wed Jun 13 22:34:08
m30000| Wed Jun 13 22:34:08 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Wed Jun 13 22:34:08
m30000| Wed Jun 13 22:34:08 [initandlisten] MongoDB starting : pid=10875 port=30000 dbpath=/data/db/auth_add_shard10 32-bit host=tp2.10gen.cc
m30000| Wed Jun 13 22:34:08 [initandlisten] _DEBUG build (which is slower)
m30000| Wed Jun 13 22:34:08 [initandlisten]
m30000| Wed Jun 13 22:34:08 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m30000| Wed Jun 13 22:34:08 [initandlisten] ** Not recommended for production.
m30000| Wed Jun 13 22:34:08 [initandlisten]
m30000| Wed Jun 13 22:34:08 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m30000| Wed Jun 13 22:34:08 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m30000| Wed Jun 13 22:34:08 [initandlisten] ** with --journal, the limit is lower
m30000| Wed Jun 13 22:34:08 [initandlisten]
m30000| Wed Jun 13 22:34:08 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m30000| Wed Jun 13 22:34:08 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30000| Wed Jun 13 22:34:08 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30000| Wed Jun 13 22:34:08 [initandlisten] options: { dbpath: "/data/db/auth_add_shard10", keyFile: "jstests/libs/key1", port: 30000 }
m30000| Wed Jun 13 22:34:08 [initandlisten] opening db: local
m30000| Wed Jun 13 22:34:08 [initandlisten] opening db: admin
m30000| Wed Jun 13 22:34:08 [initandlisten] waiting for connections on port 30000
m30000| Wed Jun 13 22:34:08 [websvr] admin web console waiting for connections on port 31000
m30000| Wed Jun 13 22:34:08 [initandlisten] connection accepted from 127.0.0.1:42191 #1 (1 connection now open)
m30000| Wed Jun 13 22:34:08 [conn1] note: no users configured in admin.system.users, allowing localhost access
"localhost:30000"
m30000| Wed Jun 13 22:34:08 [initandlisten] connection accepted from 127.0.0.1:42192 #2 (2 connections now open)
ShardingTest auth_add_shard1 :
{
    "config" : "localhost:30000",
    "shards" : [
        connection to localhost:30000
    ]
}
m30000| Wed Jun 13 22:34:08 [conn2] opening db: config
m30000| Wed Jun 13 22:34:08 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/config.ns, filling with zeroes...
m30000| Wed Jun 13 22:34:08 [FileAllocator] creating directory /data/db/auth_add_shard10/_tmp
Wed Jun 13 22:34:08 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos --port 30999 --configdb localhost:30000 -v --keyFile jstests/libs/key1
m30999| Wed Jun 13 22:34:08 security key: foopdedoop
m30999| Wed Jun 13 22:34:08 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Jun 13 22:34:08 [mongosMain] MongoS version 2.1.2-pre- starting: pid=10890 port=30999 32-bit host=tp2.10gen.cc (--help for usage)
m30999| Wed Jun 13 22:34:08 [mongosMain] _DEBUG build
m30999| Wed Jun 13 22:34:08 [mongosMain] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m30999| Wed Jun 13 22:34:08 [mongosMain] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m30999| Wed Jun 13 22:34:08 [mongosMain] options: { configdb: "localhost:30000", keyFile: "jstests/libs/key1", port: 30999, verbose: true }
m30999| Wed Jun 13 22:34:08 [mongosMain] config string : localhost:30000
m30999| Wed Jun 13 22:34:08 [mongosMain] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: ConnectBG
m30000| Wed Jun 13 22:34:08 [initandlisten] connection accepted from 127.0.0.1:42194 #3 (3 connections now open)
m30999| Wed Jun 13 22:34:08 [mongosMain] connected connection!
m30000| Wed Jun 13 22:34:08 [conn3] authenticate db: local { authenticate: 1, nonce: "266835011c3e9f2a", user: "__system", key: "f73c89731e5f16434242c7f5b224e69f" }
m30000| Wed Jun 13 22:34:08 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/config.ns, size: 16MB, took 0.069 secs
m30000| Wed Jun 13 22:34:08 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/config.0, filling with zeroes...
m30000| Wed Jun 13 22:34:08 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/config.0, size: 16MB, took 0.04 secs
m30000| Wed Jun 13 22:34:08 [conn2] datafileheader::init initializing /data/db/auth_add_shard10/config.0 n:0
m30000| Wed Jun 13 22:34:08 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/config.1, filling with zeroes...
m30000| Wed Jun 13 22:34:08 [conn2] build index config.settings { _id: 1 }
m30000| Wed Jun 13 22:34:08 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:34:08 [conn2] insert config.settings keyUpdates:0 locks(micros) r:79 w:120529 120ms
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: CheckConfigServers
m30999| Wed Jun 13 22:34:08 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:34:08 [CheckConfigServers] connected connection!
m30000| Wed Jun 13 22:34:08 [initandlisten] connection accepted from 127.0.0.1:42195 #4 (4 connections now open)
m30999| Wed Jun 13 22:34:08 [mongosMain] creating new connection to:localhost:30000
m30000| Wed Jun 13 22:34:08 [conn4] authenticate db: local { authenticate: 1, nonce: "23d55b40b755b3ff", user: "__system", key: "66c7b1862e147015846aeb3db8eeb874" }
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:34:08 [mongosMain] connected connection!
m30000| Wed Jun 13 22:34:08 [initandlisten] connection accepted from 127.0.0.1:42196 #5 (5 connections now open)
m30000| Wed Jun 13 22:34:08 [conn5] authenticate db: local { authenticate: 1, nonce: "621f9ff8208a874d", user: "__system", key: "199fefe350194bc454f5a7e526fe91d5" }
m30000| Wed Jun 13 22:34:08 [conn5] build index config.version { _id: 1 }
m30000| Wed Jun 13 22:34:08 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:34:08 [mongosMain] MaxChunkSize: 50
m30999| Wed Jun 13 22:34:08 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Wed Jun 13 22:34:08 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Wed Jun 13 22:34:08 [websvr] admin web console waiting for connections on port 31999
m30999| Wed Jun 13 22:34:08 [mongosMain] waiting for connections on port 30999
m30000| Wed Jun 13 22:34:08 [conn4] build index config.chunks { _id: 1 }
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: Balancer
m30999| Wed Jun 13 22:34:08 [Balancer] about to contact config servers and shards
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: PeriodicTask::Runner
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: cursorTimeout
m30000| Wed Jun 13 22:34:08 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:34:08 [conn4] info: creating collection config.chunks on add index
m30000| Wed Jun 13 22:34:08 [conn4] build index config.chunks { ns: 1, min: 1 }
m30000| Wed Jun 13 22:34:08 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:34:08 [Balancer] config servers and shards contacted successfully
m30999| Wed Jun 13 22:34:08 [Balancer] balancer id: tp2.10gen.cc:30999 started at Jun 13 22:34:08
m30999| Wed Jun 13 22:34:08 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:34:08 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30999| Wed Jun 13 22:34:08 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:34:08 [Balancer] connected connection!
m30000| Wed Jun 13 22:34:08 [initandlisten] connection accepted from 127.0.0.1:42197 #6 (6 connections now open)
m30000| Wed Jun 13 22:34:08 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:34:08 [conn6] authenticate db: local { authenticate: 1, nonce: "74244651dedbff4b", user: "__system", key: "30752220f4a3445f4f5b0aef7db494de" }
m30000| Wed Jun 13 22:34:08 [conn5] build index config.mongos { _id: 1 }
m30000| Wed Jun 13 22:34:08 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:34:08 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Wed Jun 13 22:34:08 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:34:08 [conn4] build index config.shards { _id: 1 }
m30000| Wed Jun 13 22:34:08 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:34:08 [conn4] info: creating collection config.shards on add index
m30000| Wed Jun 13 22:34:08 [conn4] build index config.shards { host: 1 }
m30000| Wed Jun 13 22:34:08 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:34:08 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:34:08 [LockPinger] creating distributed lock ping thread for localhost:30000 and process tp2.10gen.cc:30999:1339644848:1804289383 (sleeping for 30000ms)
m30999| Wed Jun 13 22:34:08 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Wed Jun 13 22:34:08 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:34:08 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95bb063e4ab8b6eed9dcf" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30000| Wed Jun 13 22:34:08 [conn4] build index config.lockpings { _id: 1 }
m30000| Wed Jun 13 22:34:08 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:34:08 [conn6] build index config.locks { _id: 1 }
m30000| Wed Jun 13 22:34:08 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:34:08 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:34:08 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
m30000| Wed Jun 13 22:34:08 [conn4] build index config.lockpings { ping: 1 }
m30000| Wed Jun 13 22:34:08 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Wed Jun 13 22:34:08 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95bb063e4ab8b6eed9dcf
m30999| Wed Jun 13 22:34:08 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:34:08 [Balancer] no collections to balance
m30999| Wed Jun 13 22:34:08 [Balancer] no need to move any chunk
m30999| Wed Jun 13 22:34:08 [Balancer] *** end of balancing round
m30999| Wed Jun 13 22:34:08 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
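The balancer round above is coordinated through a document in the config server's config.locks collection; the "state" and "who" fields printed by mongos correspond to that document. A quick way to inspect it from a mongos shell (hypothetical session, not taken from this log):

    // peek at the balancer's distributed lock document
    var configDB = db.getSiblingDB("config");
    printjson(configDB.locks.findOne({ _id: "balancer" }));  // state 0 = unlocked, 2 = held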
m30000| Wed Jun 13 22:34:08 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/config.1, size: 32MB, took 0.072 secs
m30999| Wed Jun 13 22:34:08 [mongosMain] connection accepted from 127.0.0.1:53886 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Wed Jun 13 22:34:08 [conn] couldn't find database [admin] in config db
m30000| Wed Jun 13 22:34:08 [conn4] build index config.databases { _id: 1 }
m30000| Wed Jun 13 22:34:08 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:34:08 [conn] put [admin] on: config:localhost:30000
m30999| Wed Jun 13 22:34:08 [conn] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: ConnectBG
m30000| Wed Jun 13 22:34:08 [initandlisten] connection accepted from 127.0.0.1:42199 #7 (7 connections now open)
m30999| Wed Jun 13 22:34:08 [conn] connected connection!
m30000| Wed Jun 13 22:34:08 [conn7] authenticate db: local { authenticate: 1, nonce: "74440aa2cf89476b", user: "__system", key: "46af09e4ecf400a4bc18fcaae319565f" }
m30999| Wed Jun 13 22:34:08 [conn] creating WriteBackListener for: localhost:30000 serverID: 4fd95bb063e4ab8b6eed9dce
m30999| Wed Jun 13 22:34:08 [conn] initializing shard connection to localhost:30000
m30999| Wed Jun 13 22:34:08 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Wed Jun 13 22:34:08 [WriteBackListener-localhost:30000] localhost:30000 is not a shard node
m30999| Wed Jun 13 22:34:08 [conn] note: no users configured in admin.system.users, allowing localhost access
m30999| Wed Jun 13 22:34:08 [conn] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
1 shard system setup
adding user
{
    "user" : "foo",
    "readOnly" : false,
    "pwd" : "3563025c1e89c7ad43fb63fcbcf1c3c6",
    "_id" : ObjectId("4fd95bb0c255e8188c9b4809")
}
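The user document above is what the pre-2.6 shell helper writes into admin.system.users; the "pwd" field is an MD5 digest, not the cleartext password. A hedged sketch of the call (the cleartext password is an assumption, only its hash appears in the log):

    // 2.x-era helper, later replaced by createUser
    var adminDB = db.getSiblingDB("admin");
    adminDB.addUser("foo", "somePassword");   // stores pwd = md5("foo" + ":mongo:" + password)
    adminDB.auth("foo", "somePassword");      // matches the authenticate command logged by mongos below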
m30000| Wed Jun 13 22:34:08 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/admin.ns, filling with zeroes...
m30000| Wed Jun 13 22:34:08 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/admin.ns, size: 16MB, took 0.128 secs
m30000| Wed Jun 13 22:34:08 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/admin.0, filling with zeroes...
m30000| Wed Jun 13 22:34:09 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/admin.0, size: 16MB, took 0.036 secs
m30000| Wed Jun 13 22:34:09 [conn7] datafileheader::init initializing /data/db/auth_add_shard10/admin.0 n:0
m30000| Wed Jun 13 22:34:09 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/admin.1, filling with zeroes...
m30000| Wed Jun 13 22:34:09 [conn7] build index admin.system.users { _id: 1 }
m30000| Wed Jun 13 22:34:09 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:34:09 [conn7] insert admin.system.users keyUpdates:0 locks(micros) W:119 r:1861 w:173579 173ms
m30999| Wed Jun 13 22:34:09 [conn] authenticate db: admin { authenticate: 1, nonce: "fcd9a15beacdce8a", user: "foo", key: "fbc713d83a76966712a79661a561698c" }
1
Resetting db path '/data/db/mongod-27000'
Wed Jun 13 22:34:09 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --port 27000 --dbpath /data/db/mongod-27000
m27000| Wed Jun 13 22:34:09
m27000| Wed Jun 13 22:34:09 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m27000| Wed Jun 13 22:34:09
m27000| Wed Jun 13 22:34:09 [initandlisten] MongoDB starting : pid=10912 port=27000 dbpath=/data/db/mongod-27000 32-bit host=tp2.10gen.cc
m27000| Wed Jun 13 22:34:09 [initandlisten] _DEBUG build (which is slower)
m27000| Wed Jun 13 22:34:09 [initandlisten]
m27000| Wed Jun 13 22:34:09 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m27000| Wed Jun 13 22:34:09 [initandlisten] ** Not recommended for production.
m27000| Wed Jun 13 22:34:09 [initandlisten]
m27000| Wed Jun 13 22:34:09 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m27000| Wed Jun 13 22:34:09 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m27000| Wed Jun 13 22:34:09 [initandlisten] ** with --journal, the limit is lower
m27000| Wed Jun 13 22:34:09 [initandlisten]
m27000| Wed Jun 13 22:34:09 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m27000| Wed Jun 13 22:34:09 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m27000| Wed Jun 13 22:34:09 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m27000| Wed Jun 13 22:34:09 [initandlisten] options: { dbpath: "/data/db/mongod-27000", port: 27000 }
m30000| Wed Jun 13 22:34:09 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/admin.1, size: 32MB, took 0.395 secs
m27000| Wed Jun 13 22:34:09 [initandlisten] opening db: local
m27000| Wed Jun 13 22:34:09 [initandlisten] waiting for connections on port 27000
m27000| Wed Jun 13 22:34:09 [websvr] admin web console waiting for connections on port 28000
m27000| Wed Jun 13 22:34:09 [initandlisten] connection accepted from 127.0.0.1:42551 #1 (1 connection now open)
connection to localhost:27000
m30999| Wed Jun 13 22:34:09 [conn] creating new connection to:localhost:27000
m30999| Wed Jun 13 22:34:09 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:34:09 [conn] connected connection!
m27000| Wed Jun 13 22:34:09 [initandlisten] connection accepted from 127.0.0.1:42552 #2 (2 connections now open)
m27000| Wed Jun 13 22:34:09 [conn2] authenticate db: local { authenticate: 1, nonce: "e17365ef560d35da", user: "__system", key: "c63d699fd02b53df589e68cae5db177c" }
m30999| Wed Jun 13 22:34:09 [conn] User Assertion: 15847:can't authenticate to shard server
m30999| Wed Jun 13 22:34:09 [conn] addshard request { addShard: "localhost:27000" } failed: couldn't connect to new shard can't authenticate to shard server
m27000| Wed Jun 13 22:34:09 [conn2] end connection 127.0.0.1:42552 (1 connection now open)
{
    "ok" : 0,
    "errmsg" : "couldn't connect to new shard can't authenticate to shard server"
}
m27000| Wed Jun 13 22:34:09 got signal 15 (Terminated), will terminate after current cmd ends
m27000| Wed Jun 13 22:34:09 [interruptThread] now exiting
m27000| Wed Jun 13 22:34:09 dbexit:
m27000| Wed Jun 13 22:34:09 [interruptThread] shutdown: going to close listening sockets...
m27000| Wed Jun 13 22:34:09 [interruptThread] closing listening socket: 25
m27000| Wed Jun 13 22:34:09 [interruptThread] closing listening socket: 26
m27000| Wed Jun 13 22:34:09 [interruptThread] closing listening socket: 27
m27000| Wed Jun 13 22:34:09 [interruptThread] removing socket file: /tmp/mongodb-27000.sock
m27000| Wed Jun 13 22:34:09 [interruptThread] shutdown: going to flush diaglog...
m27000| Wed Jun 13 22:34:09 [interruptThread] shutdown: going to close sockets...
m27000| Wed Jun 13 22:34:09 [interruptThread] shutdown: waiting for fs preallocator...
m27000| Wed Jun 13 22:34:09 [interruptThread] shutdown: closing all files...
m27000| Wed Jun 13 22:34:09 [interruptThread] closeAllFiles() finished
m27000| Wed Jun 13 22:34:09 [interruptThread] shutdown: removing fs lock...
m27000| Wed Jun 13 22:34:09 dbexit: really exiting now
Wed Jun 13 22:34:10 shell: stopped mongo program on port 27000
Resetting db path '/data/db/mongod-27000'
Wed Jun 13 22:34:10 shell: started program /home/yellow/buildslave/Linux_32bit_debug/mongo/mongod --keyFile jstests/libs/key1 --port 27000 --dbpath /data/db/mongod-27000
m27000| Wed Jun 13 22:34:10
m27000| Wed Jun 13 22:34:10 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m27000| Wed Jun 13 22:34:10
m27000| Wed Jun 13 22:34:10 [initandlisten] MongoDB starting : pid=10928 port=27000 dbpath=/data/db/mongod-27000 32-bit host=tp2.10gen.cc
m27000| Wed Jun 13 22:34:10 [initandlisten] _DEBUG build (which is slower)
m27000| Wed Jun 13 22:34:10 [initandlisten]
m27000| Wed Jun 13 22:34:10 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
m27000| Wed Jun 13 22:34:10 [initandlisten] ** Not recommended for production.
m27000| Wed Jun 13 22:34:10 [initandlisten]
m27000| Wed Jun 13 22:34:10 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
m27000| Wed Jun 13 22:34:10 [initandlisten] ** see http://blog.mongodb.org/post/137788967/32-bit-limitations
m27000| Wed Jun 13 22:34:10 [initandlisten] ** with --journal, the limit is lower
m27000| Wed Jun 13 22:34:10 [initandlisten]
m27000| Wed Jun 13 22:34:10 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
m27000| Wed Jun 13 22:34:10 [initandlisten] git version: 4d787f2622a2d99b7e85d8768546d9ff428bba18
m27000| Wed Jun 13 22:34:10 [initandlisten] build info: Linux tp2.10gen.cc 2.6.34.9-69.fc13.i686.PAE #1 SMP Tue May 3 09:13:56 UTC 2011 i686 BOOST_LIB_VERSION=1_49
m27000| Wed Jun 13 22:34:10 [initandlisten] options: { dbpath: "/data/db/mongod-27000", keyFile: "jstests/libs/key1", port: 27000 }
m27000| Wed Jun 13 22:34:10 [initandlisten] opening db: local
m27000| Wed Jun 13 22:34:10 [initandlisten] opening db: admin
m27000| Wed Jun 13 22:34:10 [initandlisten] waiting for connections on port 27000
m27000| Wed Jun 13 22:34:10 [websvr] admin web console waiting for connections on port 28000
m27000| Wed Jun 13 22:34:10 [initandlisten] connection accepted from 127.0.0.1:42554 #1 (1 connection now open)
m27000| Wed Jun 13 22:34:10 [conn1] note: no users configured in admin.system.users, allowing localhost access
m30999| Wed Jun 13 22:34:10 [conn] creating new connection to:localhost:27000
m30999| Wed Jun 13 22:34:10 BackgroundJob starting: ConnectBG
m27000| Wed Jun 13 22:34:10 [initandlisten] connection accepted from 127.0.0.1:42555 #2 (2 connections now open)
m30999| Wed Jun 13 22:34:10 [conn] connected connection!
m27000| Wed Jun 13 22:34:10 [conn2] authenticate db: local { authenticate: 1, nonce: "8c95d8b71ed5309f", user: "__system", key: "8206a5a910db0ca47d5e047d83b38be2" }
m30999| Wed Jun 13 22:34:10 [conn] going to add shard: { _id: "shard0001", host: "localhost:27000" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Wed Jun 13 22:34:10 [conn] couldn't find database [foo] in config db
m30999| Wed Jun 13 22:34:10 [conn] best shard for new allocation is shard: shard0001:localhost:27000 mapped: 0 writeLock: 0
m30999| Wed Jun 13 22:34:10 [conn] put [foo] on: shard0001:localhost:27000
m30999| Wed Jun 13 22:34:10 [conn] enabling sharding on: foo
{ "ok" : 1 }
m30999| Wed Jun 13 22:34:10 [conn] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0001" }
m30999| Wed Jun 13 22:34:10 [conn] Moving foo primary from: shard0001:localhost:27000 to: shard0000:localhost:30000
m30999| Wed Jun 13 22:34:10 [conn] created new distributed lock for foo-movePrimary on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Wed Jun 13 22:34:10 [conn] inserting initial doc in config.locks for lock foo-movePrimary
m30999| Wed Jun 13 22:34:10 [conn] about to acquire distributed lock 'foo-movePrimary/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:conn:1681692777",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:34:10 2012" },
m30999| "why" : "Moving primary shard of foo",
m30999| "ts" : { "$oid" : "4fd95bb263e4ab8b6eed9dd0" } }
m30999| { "_id" : "foo-movePrimary",
m30999| "state" : 0 }
m30999| Wed Jun 13 22:34:10 [conn] distributed lock 'foo-movePrimary/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95bb263e4ab8b6eed9dd0
m30000| Wed Jun 13 22:34:10 [conn5] opening db: foo
m27000| Wed Jun 13 22:34:10 [initandlisten] connection accepted from 127.0.0.1:42556 #3 (3 connections now open)
m27000| Wed Jun 13 22:34:10 [conn3] authenticate db: local { authenticate: 1, nonce: "51cf425d87428a8", user: "__system", key: "a4bf5002d64617359ed42b0d23dae15f" }
m27000| Wed Jun 13 22:34:10 [conn3] _DEBUG ReadContext db wasn't open, will try to open foo.system.namespaces
m27000| Wed Jun 13 22:34:10 [conn3] opening db: foo
m27000| Wed Jun 13 22:34:10 [conn3] end connection 127.0.0.1:42556 (2 connections now open)
m30999| Wed Jun 13 22:34:10 [conn] movePrimary dropping database on localhost:27000, no sharded collections in foo
m27000| Wed Jun 13 22:34:10 [conn2] dropDatabase foo
m30999| Wed Jun 13 22:34:10 [conn] distributed lock 'foo-movePrimary/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
{ "primary " : "shard0000:localhost:30000", "ok" : 1 }
m30999| Wed Jun 13 22:34:10 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30999| Wed Jun 13 22:34:10 [conn] CMD: shardcollection: { shardCollection: "foo.bar", key: { _id: 1.0 } }
m30999| Wed Jun 13 22:34:10 [conn] enable sharding on: foo.bar with shard key: { _id: 1.0 }
m30999| Wed Jun 13 22:34:10 [conn] going to create 1 chunk(s) for: foo.bar using new epoch 4fd95bb263e4ab8b6eed9dd1
m30000| Wed Jun 13 22:34:10 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/foo.ns, filling with zeroes...
m30999| Wed Jun 13 22:34:10 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 2 version: 1|0||4fd95bb263e4ab8b6eed9dd1 based on: (empty)
m30999| Wed Jun 13 22:34:10 [conn] DEV WARNING appendDate() called with a tiny (but nonzero) date
m30000| Wed Jun 13 22:34:10 [conn4] build index config.collections { _id: 1 }
m30000| Wed Jun 13 22:34:10 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Jun 13 22:34:10 [conn] creating new connection to:localhost:27000
m30999| Wed Jun 13 22:34:10 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:34:10 [conn] connected connection!
m27000| Wed Jun 13 22:34:10 [initandlisten] connection accepted from 127.0.0.1:42557 #4 (3 connections now open)
m27000| Wed Jun 13 22:34:10 [conn4] authenticate db: local { authenticate: 1, nonce: "f444e5979da392c6", user: "__system", key: "e06e76cb77c671811f1b05d9a0796b75" }
m30999| Wed Jun 13 22:34:10 [conn] creating WriteBackListener for: localhost:27000 serverID: 4fd95bb063e4ab8b6eed9dce
m30999| Wed Jun 13 22:34:10 [conn] initializing shard connection to localhost:27000
m30999| Wed Jun 13 22:34:10 BackgroundJob starting: WriteBackListener-localhost:27000
m30999| Wed Jun 13 22:34:10 [conn] resetting shard version of foo.bar on localhost:27000, version is zero
m30999| Wed Jun 13 22:34:10 [conn] setShardVersion shard0001 localhost:27000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('4fd95bb063e4ab8b6eed9dce'), shard: "shard0001", shardHost: "localhost:27000" } 0xb30060d0
m30999| Wed Jun 13 22:34:10 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Wed Jun 13 22:34:10 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), serverID: ObjectId('4fd95bb063e4ab8b6eed9dce'), shard: "shard0000", shardHost: "localhost:30000" } 0xb3001540
m30000| Wed Jun 13 22:34:10 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/foo.ns, size: 16MB, took 0.03 secs
m30000| Wed Jun 13 22:34:10 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/foo.0, filling with zeroes...
m30000| Wed Jun 13 22:34:10 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/foo.0, size: 16MB, took 0.044 secs
m30000| Wed Jun 13 22:34:10 [conn6] datafileheader::init initializing /data/db/auth_add_shard10/foo.0 n:0
m30000| Wed Jun 13 22:34:10 [FileAllocator] allocating new datafile /data/db/auth_add_shard10/foo.1, filling with zeroes...
m30000| Wed Jun 13 22:34:10 [conn6] build index foo.bar { _id: 1 }
m30000| Wed Jun 13 22:34:10 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Wed Jun 13 22:34:10 [conn6] info: creating collection foo.bar on add index
m30999| Wed Jun 13 22:34:10 [conn] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.bar", need_authoritative: true, errmsg: "first time for collection 'foo.bar'", ok: 0.0 }
m30999| Wed Jun 13 22:34:10 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), serverID: ObjectId('4fd95bb063e4ab8b6eed9dce'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0xb3001540
m30000| Wed Jun 13 22:34:10 [conn7] no current chunk manager found for this shard, will initialize
m30000| Wed Jun 13 22:34:11 [conn7] query config.chunks query: { $or: [ { ns: "foo.bar", lastmod: { $gte: Timestamp 0|0 } }, { ns: "foo.bar", shard: "shard0000", lastmod: { $gt: Timestamp 0|0 } } ] } ntoreturn:0 ntoskip:0 nscanned:1 keyUpdates:0 locks(micros) r:436381 nreturned:1 reslen:163 436ms
m30000| Wed Jun 13 22:34:11 [conn7] setShardVersion - relocking slow: 436
m30000| Wed Jun 13 22:34:11 [conn7] command admin.$cmd command: { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), serverID: ObjectId('4fd95bb063e4ab8b6eed9dce'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:166 r:1923 w:173579 reslen:86 436ms
m30999| Wed Jun 13 22:34:11 [conn] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Wed Jun 13 22:34:11 [FileAllocator] done allocating datafile /data/db/auth_add_shard10/foo.1, size: 32MB, took 0.466 secs
m30999| Wed Jun 13 22:34:11 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey } dataWritten: 6853758 splitThreshold: 921
m30999| Wed Jun 13 22:34:11 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Wed Jun 13 22:34:11 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { _id: MinKey } max: { _id: MaxKey }
m30000| Wed Jun 13 22:34:11 [initandlisten] connection accepted from 127.0.0.1:42208 #8 (8 connections now open)
m30000| Wed Jun 13 22:34:11 [conn8] authenticate db: local { authenticate: 1, nonce: "cc50df8c451e5d4b", user: "__system", key: "de535b808bcafd37be3e8c0e131c2c1a" }
m30000| Wed Jun 13 22:34:11 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "foo.bar-_id_MinKey", configdb: "localhost:30000" }
m30000| Wed Jun 13 22:34:11 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:34:11 [LockPinger] creating distributed lock ping thread for localhost:30000 and process tp2.10gen.cc:30000:1339644851:1096328315 (sleeping for 30000ms)
m30000| Wed Jun 13 22:34:11 [initandlisten] connection accepted from 127.0.0.1:42209 #9 (9 connections now open)
m30000| Wed Jun 13 22:34:11 [conn9] authenticate db: local { authenticate: 1, nonce: "a3dfd9e3e0a574bd", user: "__system", key: "c062ce6b13e913587a7989f33b856ddc" }
m30000| Wed Jun 13 22:34:11 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' acquired, ts : 4fd95bb3d2fb235d7519d8d9
m30000| Wed Jun 13 22:34:11 [conn5] splitChunk accepted at version 1|0||4fd95bb263e4ab8b6eed9dd1
m30000| Wed Jun 13 22:34:11 [conn8] info PageFaultRetryableSection will not yield, already locked upon reaching
m30000| Wed Jun 13 22:34:11 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:34:11-0", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:42196", time: new Date(1339644851394), what: "split", ns: "foo.bar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1') } } }
m30000| Wed Jun 13 22:34:11 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' unlocked.
m30999| Wed Jun 13 22:34:11 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 3 version: 1|2||4fd95bb263e4ab8b6eed9dd1 based on: 1|0||4fd95bb263e4ab8b6eed9dd1
{ "ok" : 1 }
m30999| Wed Jun 13 22:34:11 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), serverID: ObjectId('4fd95bb063e4ab8b6eed9dce'), shard: "shard0000", shardHost: "localhost:30000" } 0xb3001540
m30999| Wed Jun 13 22:34:11 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ok: 1.0 }
m30999| Wed Jun 13 22:34:11 [conn] about to initiate autosplit: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey } dataWritten: 8083694 splitThreshold: 471859
m30999| Wed Jun 13 22:34:11 [conn] chunk not full enough to trigger auto-split no split entry
m30999| Wed Jun 13 22:34:11 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { _id: 0.0 } max: { _id: MaxKey }
m30000| Wed Jun 13 22:34:11 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "foo.bar-_id_0.0", configdb: "localhost:30000" }
m30000| Wed Jun 13 22:34:11 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:34:11 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' acquired, ts : 4fd95bb3d2fb235d7519d8da
m30000| Wed Jun 13 22:34:11 [conn5] splitChunk accepted at version 1|2||4fd95bb263e4ab8b6eed9dd1
m30000| Wed Jun 13 22:34:11 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:34:11-1", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:42196", time: new Date(1339644851427), what: "split", ns: "foo.bar", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1') } } }
m30000| Wed Jun 13 22:34:11 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' unlocked.
m30999| Wed Jun 13 22:34:11 [conn] ChunkManager: time to load chunks for foo.bar: 0ms sequenceNumber: 4 version: 1|4||4fd95bb263e4ab8b6eed9dd1 based on: 1|2||4fd95bb263e4ab8b6eed9dd1
{ "ok" : 1 }
m30999| Wed Jun 13 22:34:11 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), serverID: ObjectId('4fd95bb063e4ab8b6eed9dce'), shard: "shard0000", shardHost: "localhost:30000" } 0xb3001540
m30999| Wed Jun 13 22:34:11 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ok: 1.0 }
m30999| Wed Jun 13 22:34:11 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|4||000000000000000000000000 min: { _id: 1.0 } max: { _id: MaxKey }
m30000| Wed Jun 13 22:34:11 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" }
m30000| Wed Jun 13 22:34:11 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:34:11 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' acquired, ts : 4fd95bb3d2fb235d7519d8db
m30000| Wed Jun 13 22:34:11 [conn5] splitChunk accepted at version 1|4||4fd95bb263e4ab8b6eed9dd1
m30000| Wed Jun 13 22:34:11 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:34:11-2", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:42196", time: new Date(1339644851436), what: "split", ns: "foo.bar", details: { before: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1') }, right: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1') } } }
m30000| Wed Jun 13 22:34:11 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' unlocked.
m30999| Wed Jun 13 22:34:11 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 5 version: 1|6||4fd95bb263e4ab8b6eed9dd1 based on: 1|4||4fd95bb263e4ab8b6eed9dd1
{ "ok" : 1 }
m30999| Wed Jun 13 22:34:11 [conn] setShardVersion shard0000 localhost:30000 foo.bar { setShardVersion: "foo.bar", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), serverID: ObjectId('4fd95bb063e4ab8b6eed9dce'), shard: "shard0000", shardHost: "localhost:30000" } 0xb3001540
m30999| Wed Jun 13 22:34:11 [conn] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ok: 1.0 }
m30999| Wed Jun 13 22:34:11 [conn] splitting: foo.bar shard: ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|6||000000000000000000000000 min: { _id: 2.0 } max: { _id: MaxKey }
m30000| Wed Jun 13 22:34:11 [conn5] received splitChunk request: { splitChunk: "foo.bar", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], shardId: "foo.bar-_id_2.0", configdb: "localhost:30000" }
m30000| Wed Jun 13 22:34:11 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:34:11 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' acquired, ts : 4fd95bb3d2fb235d7519d8dc
m30000| Wed Jun 13 22:34:11 [conn5] splitChunk accepted at version 1|6||4fd95bb263e4ab8b6eed9dd1
m30000| Wed Jun 13 22:34:11 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:34:11-3", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:42196", time: new Date(1339644851466), what: "split", ns: "foo.bar", details: { before: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1') }, right: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1') } } }
m30000| Wed Jun 13 22:34:11 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' unlocked.
m30999| Wed Jun 13 22:34:11 [conn] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 6 version: 1|8||4fd95bb263e4ab8b6eed9dd1 based on: 1|6||4fd95bb263e4ab8b6eed9dd1
{ "ok" : 1 }
m30999| Wed Jun 13 22:34:11 [conn] CMD: movechunk: { moveChunk: "foo.bar", find: { _id: 1.0 }, to: "shard0001" }
m30999| Wed Jun 13 22:34:11 [conn] moving chunk ns: foo.bar moving ( ns:foo.bar at: shard0000:localhost:30000 lastmod: 1|5||000000000000000000000000 min: { _id: 1.0 } max: { _id: 2.0 }) shard0000:localhost:30000 -> shard0001:localhost:27000
m30000| Wed Jun 13 22:34:11 [conn5] received moveChunk request: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:27000", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" }
m30000| Wed Jun 13 22:34:11 [conn5] created new distributed lock for foo.bar on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Wed Jun 13 22:34:11 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' acquired, ts : 4fd95bb3d2fb235d7519d8dd
m30000| Wed Jun 13 22:34:11 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:34:11-4", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:42196", time: new Date(1339644851473), what: "moveChunk.start", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", to: "shard0001" } }
m30000| Wed Jun 13 22:34:11 [conn5] moveChunk request accepted at version 1|8||4fd95bb263e4ab8b6eed9dd1
m30000| Wed Jun 13 22:34:11 [conn5] moveChunk number of documents: 1
m27000| Wed Jun 13 22:34:11 [initandlisten] connection accepted from 127.0.0.1:42560 #5 (4 connections now open)
m27000| Wed Jun 13 22:34:11 [conn5] authenticate db: local { authenticate: 1, nonce: "2d1ad9642ee0178d", user: "__system", key: "ec028a35da9ecb469090b27e2c4ce9e6" }
m30000| Wed Jun 13 22:34:11 [initandlisten] connection accepted from 127.0.0.1:42211 #10 (10 connections now open)
m30000| Wed Jun 13 22:34:11 [conn10] authenticate db: local { authenticate: 1, nonce: "2d7460bf7e25f194", user: "__system", key: "02adbd84ea893c2ab7dfb8eb21bf78c8" }
m27000| Wed Jun 13 22:34:11 [migrateThread] opening db: foo
m27000| Wed Jun 13 22:34:11 [FileAllocator] allocating new datafile /data/db/mongod-27000/foo.ns, filling with zeroes...
m27000| Wed Jun 13 22:34:11 [FileAllocator] creating directory /data/db/mongod-27000/_tmp
m27000| Wed Jun 13 22:34:11 [FileAllocator] done allocating datafile /data/db/mongod-27000/foo.ns, size: 16MB, took 0.038 secs
m27000| Wed Jun 13 22:34:11 [FileAllocator] allocating new datafile /data/db/mongod-27000/foo.0, filling with zeroes...
m27000| Wed Jun 13 22:34:11 [FileAllocator] done allocating datafile /data/db/mongod-27000/foo.0, size: 16MB, took 0.034 secs
m27000| Wed Jun 13 22:34:11 [migrateThread] datafileheader::init initializing /data/db/mongod-27000/foo.0 n:0
m27000| Wed Jun 13 22:34:11 [FileAllocator] allocating new datafile /data/db/mongod-27000/foo.1, filling with zeroes...
m27000| Wed Jun 13 22:34:11 [migrateThread] build index foo.bar { _id: 1 }
m27000| Wed Jun 13 22:34:11 [migrateThread] build index done. scanned 0 total records. 0 secs
m27000| Wed Jun 13 22:34:11 [migrateThread] info: creating collection foo.bar on add index
m27000| Wed Jun 13 22:34:11 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: 2.0 }
m27000| Wed Jun 13 22:34:11 [FileAllocator] done allocating datafile /data/db/mongod-27000/foo.1, size: 32MB, took 0.065 secs
m30000| Wed Jun 13 22:34:12 [conn5] moveChunk data transfer progress: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 18, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Wed Jun 13 22:34:12 [conn5] moveChunk setting version to: 2|0||4fd95bb263e4ab8b6eed9dd1
m27000| Wed Jun 13 22:34:12 [migrateThread] migrate commit succeeded flushing to secondaries for 'foo.bar' { _id: 1.0 } -> { _id: 2.0 }
m27000| Wed Jun 13 22:34:12 [migrateThread] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:34:12-0", server: "tp2.10gen.cc", clientAddr: ":27017", time: new Date(1339644852483), what: "moveChunk.to", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 5: 83, step2 of 5: 0, step3 of 5: 1, step4 of 5: 0, step5 of 5: 922 } }
m30000| Wed Jun 13 22:34:12 [initandlisten] connection accepted from 127.0.0.1:42212 #11 (11 connections now open)
m30000| Wed Jun 13 22:34:12 [conn11] authenticate db: local { authenticate: 1, nonce: "6133fd0741aee63f", user: "__system", key: "3c828089a42c55474de0bf20d612187b" }
m30000| Wed Jun 13 22:34:12 [conn5] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "foo.bar", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 18, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Wed Jun 13 22:34:12 [conn5] moveChunk updating self version to: 2|1||4fd95bb263e4ab8b6eed9dd1 through { _id: MinKey } -> { _id: 0.0 } for collection 'foo.bar'
m30000| Wed Jun 13 22:34:12 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:34:12-5", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:42196", time: new Date(1339644852486), what: "moveChunk.commit", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", to: "shard0001" } }
m30000| Wed Jun 13 22:34:12 [conn5] doing delete inline
m30000| Wed Jun 13 22:34:12 [conn5] moveChunk deleted: 1
m30000| Wed Jun 13 22:34:12 [conn5] distributed lock 'foo.bar/tp2.10gen.cc:30000:1339644851:1096328315' unlocked.
m30000| Wed Jun 13 22:34:12 [conn5] about to log metadata event: { _id: "tp2.10gen.cc-2012-06-14T03:34:12-6", server: "tp2.10gen.cc", clientAddr: "127.0.0.1:42196", time: new Date(1339644852495), what: "moveChunk.from", ns: "foo.bar", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 1000, step5 of 6: 9, step6 of 6: 8 } }
m30000| Wed Jun 13 22:34:12 [conn5] command admin.$cmd command: { moveChunk: "foo.bar", from: "localhost:30000", to: "localhost:27000", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "foo.bar-_id_1.0", configdb: "localhost:30000" } ntoreturn:1 keyUpdates:0 locks(micros) R:10 r:6568 w:60313 reslen:37 1024ms
m30999| Wed Jun 13 22:34:12 [conn] moveChunk result: { ok: 1.0 }
m30999| Wed Jun 13 22:34:12 [conn] ChunkManager: time to load chunks for foo.bar: 11ms sequenceNumber: 7 version: 2|1||4fd95bb263e4ab8b6eed9dd1 based on: 1|8||4fd95bb263e4ab8b6eed9dd1
{ "millis" : 1037, "ok" : 1 }
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
shards:
{ "_id" : "shard0000", "host" : "localhost:30000" }
{ "_id" : "shard0001", "host" : "localhost:27000" }
databases:
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "foo", "partitioned" : true, "primary" : "shard0000" }
foo.bar chunks:
shard0000 4
shard0001 1
{ "_id" : { $minKey : 1 } } -->> { "_id" : 0 } on : shard0000 Timestamp(2000, 1)
{ "_id" : 0 } -->> { "_id" : 1 } on : shard0000 Timestamp(1000, 3)
{ "_id" : 1 } -->> { "_id" : 2 } on : shard0001 Timestamp(2000, 0)
{ "_id" : 2 } -->> { "_id" : 3 } on : shard0000 Timestamp(1000, 7)
{ "_id" : 3 } -->> { "_id" : { $maxKey : 1 } } on : shard0000 Timestamp(1000, 8)
m30999| Wed Jun 13 22:34:18 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:34:18 [Balancer] creating new connection to:localhost:27000
m30999| Wed Jun 13 22:34:18 BackgroundJob starting: ConnectBG
m27000| Wed Jun 13 22:34:18 [initandlisten] connection accepted from 127.0.0.1:42563 #6 (5 connections now open)
m30999| Wed Jun 13 22:34:18 [Balancer] connected connection!
m27000| Wed Jun 13 22:34:18 [conn6] authenticate db: local { authenticate: 1, nonce: "e9894d2e3d952d18", user: "__system", key: "cb01fce8eac0158b9711b96fa1b83654" }
m30999| Wed Jun 13 22:34:18 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:34:18 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95bba63e4ab8b6eed9dd2" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95bb063e4ab8b6eed9dcf" } }
m30999| Wed Jun 13 22:34:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95bba63e4ab8b6eed9dd2
m30999| Wed Jun 13 22:34:18 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:34:18 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:34:18 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:34:18 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:34:18 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:34:18 [Balancer] shard0000
m30999| Wed Jun 13 22:34:18 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:34:18 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:34:18 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:34:18 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:34:18 [Balancer] shard0001
m30999| Wed Jun 13 22:34:18 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:34:18 [Balancer] ----
m30999| Wed Jun 13 22:34:18 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:34:18 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:34:18 [Balancer] receiver : 1 chunks on shard0001
m30999| Wed Jun 13 22:34:18 [Balancer] chose [shard0000] to [shard0001] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:34:18 [Balancer] DBConfig unserialize: foo { _id: "foo", partitioned: true, primary: "shard0000" }
m30999| Wed Jun 13 22:34:18 [Balancer] ChunkManager: time to load chunks for foo.bar: 1ms sequenceNumber: 8 version: 2|1||4fd95bb263e4ab8b6eed9dd1 based on: (empty)
m30999| Wed Jun 13 22:34:18 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:34:18 [Balancer] dev: lastError==0 won't report:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:34:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:34:18 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:34:18 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:34:18 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:34:18 [conn5] end connection 127.0.0.1:42196 (10 connections now open)
m30999| Wed Jun 13 22:34:18 [conn] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
m30999| Wed Jun 13 22:34:18 [conn] going to start draining shard: shard0001
m30999| primaryLocalDoc: { _id: "local", primary: "shard0001" }
m30999| Wed Jun 13 22:34:18 [conn] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:34:18 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:34:18 [conn] connected connection!
m30000| Wed Jun 13 22:34:18 [initandlisten] connection accepted from 127.0.0.1:42214 #12 (11 connections now open)
m30000| Wed Jun 13 22:34:18 [conn12] authenticate db: local { authenticate: 1, nonce: "a6550ff55221f71c", user: "__system", key: "0005c3d634c16cfc189a77e6942fbe39" }
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "shard0001",
"ok" : 1
}
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:34:38 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:34:38 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
m30999| Wed Jun 13 22:34:48 [conn] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:34:48 BackgroundJob starting: ConnectBG
m30000| Wed Jun 13 22:34:48 [initandlisten] connection accepted from 127.0.0.1:42221 #13 (12 connections now open)
m30999| Wed Jun 13 22:34:48 [conn] connected connection!
m30000| Wed Jun 13 22:34:48 [conn13] authenticate db: local { authenticate: 1, nonce: "842209440283a50b", user: "__system", key: "9c708e159b7e5b5b0fc321d85286cae2" }
m30999| Wed Jun 13 22:34:48 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:34:48 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:34:48 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95bd863e4ab8b6eed9dd3" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95bba63e4ab8b6eed9dd2" } }
m30999| Wed Jun 13 22:34:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95bd863e4ab8b6eed9dd3
m30999| Wed Jun 13 22:34:48 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:34:48 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:34:48 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:34:48 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Wed Jun 13 22:34:48 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:34:48 [Balancer] shard0000
m30999| Wed Jun 13 22:34:48 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:34:48 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:34:48 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:34:48 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:34:48 [Balancer] shard0001
m30999| Wed Jun 13 22:34:48 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:34:48 [Balancer] ----
m30999| Wed Jun 13 22:34:48 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:34:48 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:34:48 [Balancer] receiver : 4 chunks on shard0000
m30999| Wed Jun 13 22:34:48 [Balancer] draining : 1(1)
m30999| Wed Jun 13 22:34:48 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:34:48 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:34:48 [Balancer] dev: lastError==0 won't report:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:34:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:34:48 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:34:48 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:34:48 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:34:48 [conn6] end connection 127.0.0.1:42197 (11 connections now open)
Wed Jun 13 22:35:06 [clientcursormon] mem (MB) res:20 virt:129 mapped:0
m30000| Wed Jun 13 22:35:08 [clientcursormon] mem (MB) res:70 virt:247 mapped:96
... 2 identical "draining ongoing" status documents omitted ...
m30999| Wed Jun 13 22:35:08 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:35:08 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
... 9 identical "draining ongoing" status documents omitted ...
m27000| Wed Jun 13 22:35:10 [clientcursormon] mem (MB) res:36 virt:179 mapped:32
... 39 identical "draining ongoing" status documents omitted ...
m30999| Wed Jun 13 22:35:18 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:35:18 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:35:18 [Balancer] connected connection!
m30000| Wed Jun 13 22:35:18 [initandlisten] connection accepted from 127.0.0.1:42222 #14 (12 connections now open)
m30000| Wed Jun 13 22:35:18 [conn14] authenticate db: local { authenticate: 1, nonce: "47bb1f7725afff5f", user: "__system", key: "a31461f9b8edd898f75f509b10e9cdaa" }
m30999| Wed Jun 13 22:35:18 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:35:18 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:35:18 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95bf663e4ab8b6eed9dd4" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95bd863e4ab8b6eed9dd3" } }
m30999| Wed Jun 13 22:35:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95bf663e4ab8b6eed9dd4
m30999| Wed Jun 13 22:35:18 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:35:18 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:35:18 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:35:18 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Wed Jun 13 22:35:18 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:35:18 [Balancer] shard0000
m30999| Wed Jun 13 22:35:18 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:35:18 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:35:18 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:35:18 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:35:18 [Balancer] shard0001
m30999| Wed Jun 13 22:35:18 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:35:18 [Balancer] ----
m30999| Wed Jun 13 22:35:18 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:35:18 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:35:18 [Balancer] receiver : 4 chunks on shard0000
m30999| Wed Jun 13 22:35:18 [Balancer] draining : 1(1)
m30999| Wed Jun 13 22:35:18 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:35:18 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:35:18 [Balancer] dev: lastError==0 won't report:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:35:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:35:18 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:35:18 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30000| Wed Jun 13 22:35:18 [conn12] end connection 127.0.0.1:42214 (11 connections now open)
m30999| Wed Jun 13 22:35:18 [Balancer] *** End of balancing round
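Each balancer round above follows the same steps: dump the per-shard state (ShardInfoMap: size, draining flag, queued ops) and the per-shard chunk lists (ShardToChunksMap), then, because shard0001 is marked draining, pick one of its chunks and nominate a non-draining shard as the receiver. A rough sketch of that selection, with hypothetical helper and field names (the real logic is the C++ balancer policy inside mongos, not this JavaScript):

    // Illustrative only; shapes of shardInfo / chunksByShard mirror the
    // ShardInfoMap and ShardToChunksMap dumps printed in the round above.
    function chooseDrainingMigration(shardInfo, chunksByShard) {
        for (var shard in shardInfo) {
            // Any chunk still sitting on a draining shard has to be moved off it.
            if (!shardInfo[shard].draining || chunksByShard[shard].length === 0)
                continue;

            // Receiver: the non-draining shard currently holding the fewest chunks.
            var receiver = null;
            for (var other in shardInfo) {
                if (shardInfo[other].draining)
                    continue;
                if (receiver === null ||
                    chunksByShard[other].length < chunksByShard[receiver].length)
                    receiver = other;
            }

            // Logged above as: chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", ... }
            return { from: shard, to: receiver, chunk: chunksByShard[shard][0] };
        }
        return null;    // nothing left to drain
    }

In this run the chosen migration never happens: the shard-key comparison inside the move is handed an empty document, the 10199 assertion ("right object ({}) doesn't have full shard key ({ _id: 1.0 })") aborts the round, and the drain stays stuck at one remaining chunk, which is why the status documents before and after this round never change.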
... 98 identical "draining ongoing" status documents omitted ...
m30999| Wed Jun 13 22:35:38 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:35:38 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
... 49 identical "draining ongoing" status documents omitted ...
m30999| Wed Jun 13 22:35:48 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:35:48 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:35:48 [Balancer] connected connection!
m30000| Wed Jun 13 22:35:48 [initandlisten] connection accepted from 127.0.0.1:42223 #15 (12 connections now open)
m30000| Wed Jun 13 22:35:48 [conn15] authenticate db: local { authenticate: 1, nonce: "5ddfb3a8c43f8805", user: "__system", key: "992638c82bc0db0d4c20b54ad7111181" }
m30999| Wed Jun 13 22:35:48 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:35:48 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:35:48 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95c1463e4ab8b6eed9dd5" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95bf663e4ab8b6eed9dd4" } }
m30999| Wed Jun 13 22:35:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95c1463e4ab8b6eed9dd5
m30999| Wed Jun 13 22:35:48 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:35:48 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:35:48 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:35:48 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Wed Jun 13 22:35:48 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:35:48 [Balancer] shard0000
m30999| Wed Jun 13 22:35:48 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:35:48 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:35:48 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:35:48 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:35:48 [Balancer] shard0001
m30999| Wed Jun 13 22:35:48 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:35:48 [Balancer] ----
m30999| Wed Jun 13 22:35:48 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:35:48 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:35:48 [Balancer] receiver : 4 chunks on shard0000
m30999| Wed Jun 13 22:35:48 [Balancer] draining : 1(1)
m30999| Wed Jun 13 22:35:48 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:35:48 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:35:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:35:48 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:35:48 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:35:48 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:35:48 [conn14] end connection 127.0.0.1:42222 (11 connections now open)
... 96 identical "draining ongoing" status documents omitted ...
m30999| Wed Jun 13 22:36:08 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:36:08 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
... 50 identical "draining ongoing" status documents omitted ...
m30999| Wed Jun 13 22:36:18 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:36:18 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:36:18 [Balancer] connected connection!
m30000| Wed Jun 13 22:36:18 [initandlisten] connection accepted from 127.0.0.1:42224 #16 (12 connections now open)
m30000| Wed Jun 13 22:36:18 [conn16] authenticate db: local { authenticate: 1, nonce: "6d68d5310b336bcf", user: "__system", key: "0c9d887c6542a7743fb04c7050a764d6" }
m30999| Wed Jun 13 22:36:18 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:36:18 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:36:18 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95c3263e4ab8b6eed9dd6" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95c1463e4ab8b6eed9dd5" } }
m30999| Wed Jun 13 22:36:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95c3263e4ab8b6eed9dd6
m30999| Wed Jun 13 22:36:18 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:36:18 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:36:18 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:36:18 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Wed Jun 13 22:36:18 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:36:18 [Balancer] shard0000
m30999| Wed Jun 13 22:36:18 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:36:18 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:36:18 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:36:18 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:36:18 [Balancer] shard0001
m30999| Wed Jun 13 22:36:18 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:36:18 [Balancer] ----
m30999| Wed Jun 13 22:36:18 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:36:18 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:36:18 [Balancer] receiver : 4 chunks on shard0000
m30999| Wed Jun 13 22:36:18 [Balancer] draining : 1(1)
m30999| Wed Jun 13 22:36:18 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:36:18 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:36:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:36:18 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:36:18 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:36:18 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:36:18 [conn15] end connection 127.0.0.1:42223 (11 connections now open)
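Each balancing round above ends the same way: the balancer picks the one chunk left on the draining shard0001 ({ _id: 1.0 } to { _id: 2.0 }), then hits user assertion 10199 because the right-hand object in its check is an empty document {}, which cannot supply the full shard key { _id: 1.0 }; the round aborts, the lock is released, and the chunk never moves. Roughly, the failing check requires every field of the shard-key pattern to be present in the object being examined. A shell-level restatement of that predicate, purely illustrative since the server performs this check in C++:

    // Illustrative only: restates the "full shard key" requirement behind assertion 10199.
    function hasFullShardKey(keyPattern, obj) {
        for (var field in keyPattern) {
            if (!obj.hasOwnProperty(field)) return false;
        }
        return true;
    }
    hasFullShardKey({ _id: 1 }, { _id: MinKey });  // true  - object covers the key
    hasFullShardKey({ _id: 1 }, {});               // false - "right object ({}) doesn't have full shard key"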
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:36:38 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:36:38 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
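The LockPinger line shows the mongos keeping its entry in the config server's lock bookkeeping fresh while it holds or contends for the balancer lock; the lock document it prints before each round ("state", "who", "process", "why", "ts") lives in config.locks. A small shell sketch for inspecting that state on the config server at localhost:30000; field names beyond those printed above, and the lockpings document shape, are assumptions:

    var config = db.getSiblingDB("config");
    // The balancer lock document printed before each round above:
    printjson(config.locks.findOne({ _id: "balancer" }));
    // Last ping recorded for this mongos process (document shape assumed):
    printjson(config.lockpings.findOne({ _id: "tp2.10gen.cc:30999:1339644848:1804289383" }));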
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:36:48 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:36:48 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:36:48 [Balancer] connected connection!
m30000| Wed Jun 13 22:36:48 [initandlisten] connection accepted from 127.0.0.1:42225 #17 (12 connections now open)
m30000| Wed Jun 13 22:36:48 [conn17] authenticate db: local { authenticate: 1, nonce: "8493286d62211df1", user: "__system", key: "4ac85bcce7a243f3bb4f5c5475f1d87a" }
m30999| Wed Jun 13 22:36:48 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:36:48 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:36:48 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95c5063e4ab8b6eed9dd7" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95c3263e4ab8b6eed9dd6" } }
m30999| Wed Jun 13 22:36:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95c5063e4ab8b6eed9dd7
m30999| Wed Jun 13 22:36:48 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:36:48 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:36:48 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:36:48 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Wed Jun 13 22:36:48 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:36:48 [Balancer] shard0000
m30999| Wed Jun 13 22:36:48 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:36:48 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:36:48 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:36:48 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:36:48 [Balancer] shard0001
m30999| Wed Jun 13 22:36:48 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:36:48 [Balancer] ----
m30999| Wed Jun 13 22:36:48 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:36:48 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:36:48 [Balancer] receiver : 4 chunks on shard0000
m30999| Wed Jun 13 22:36:48 [Balancer] draining : 1(1)
m30999| Wed Jun 13 22:36:48 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:36:48 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:36:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:36:48 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:36:48 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:36:48 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:36:48 [conn16] end connection 127.0.0.1:42224 (11 connections now open)
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:37:08 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:37:08 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:37:18 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:37:18 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:37:18 [Balancer] connected connection!
m30000| Wed Jun 13 22:37:18 [initandlisten] connection accepted from 127.0.0.1:42226 #18 (12 connections now open)
m30000| Wed Jun 13 22:37:18 [conn18] authenticate db: local { authenticate: 1, nonce: "ffe1f7895e1dc208", user: "__system", key: "34fbc548039a3b75c1d369b46d8f2865" }
m30999| Wed Jun 13 22:37:18 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:37:18 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:37:18 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95c6e63e4ab8b6eed9dd8" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95c5063e4ab8b6eed9dd7" } }
m30999| Wed Jun 13 22:37:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95c6e63e4ab8b6eed9dd8
m30999| Wed Jun 13 22:37:18 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:37:18 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:37:18 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:37:18 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Wed Jun 13 22:37:18 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:37:18 [Balancer] shard0000
m30999| Wed Jun 13 22:37:18 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:37:18 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:37:18 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:37:18 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:37:18 [Balancer] shard0001
m30999| Wed Jun 13 22:37:18 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:37:18 [Balancer] ----
m30999| Wed Jun 13 22:37:18 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:37:18 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:37:18 [Balancer] receiver : 4 chunks on shard0000
m30999| Wed Jun 13 22:37:18 [Balancer] draining : 1(1)
m30999| Wed Jun 13 22:37:18 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:37:18 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:37:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:37:18 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:37:18 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:37:18 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:37:18 [conn17] end connection 127.0.0.1:42225 (11 connections now open)
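Because every round dies on the same assertion, the draining chunk { _id: 1.0 } to { _id: 2.0 } never leaves shard0001 and removeShard keeps reporting one remaining chunk. For reference, the migration the balancer keeps choosing is equivalent to the manual command below; it is shown only to make the chosen move concrete, not as something this test issues itself:

    // Manual form of the migration the balancer selects each round.
    db.getSiblingDB("admin").runCommand({
        moveChunk: "foo.bar",
        find: { _id: 1 },        // any document inside the { _id: 1.0 } -> { _id: 2.0 } chunk
        to: "shard0000"
    });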
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:37:38 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:37:38 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:37:48 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:37:48 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:37:48 [Balancer] connected connection!
m30000| Wed Jun 13 22:37:48 [initandlisten] connection accepted from 127.0.0.1:42227 #19 (12 connections now open)
m30000| Wed Jun 13 22:37:48 [conn19] authenticate db: local { authenticate: 1, nonce: "d0e2effd1b021ae3", user: "__system", key: "0417c2966ed67b5cd4588c2aa624f141" }
m30999| Wed Jun 13 22:37:48 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:37:48 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:37:48 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95c8c63e4ab8b6eed9dd9" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95c6e63e4ab8b6eed9dd8" } }
m30999| Wed Jun 13 22:37:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95c8c63e4ab8b6eed9dd9
m30999| Wed Jun 13 22:37:48 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:37:48 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:37:48 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:37:48 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Wed Jun 13 22:37:48 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:37:48 [Balancer] shard0000
m30999| Wed Jun 13 22:37:48 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:37:48 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:37:48 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:37:48 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:37:48 [Balancer] shard0001
m30999| Wed Jun 13 22:37:48 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:37:48 [Balancer] ----
m30999| Wed Jun 13 22:37:48 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:37:48 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:37:48 [Balancer] receiver : 4 chunks on shard0000
m30999| Wed Jun 13 22:37:48 [Balancer] draining : 1(1)
m30999| Wed Jun 13 22:37:48 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:37:48 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:37:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:37:48 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:37:48 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:37:48 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:37:48 [conn18] end connection 127.0.0.1:42226 (11 connections now open)
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:38:08 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:38:08 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:38:18 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:38:18 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:38:18 [Balancer] connected connection!
m30000| Wed Jun 13 22:38:18 [initandlisten] connection accepted from 127.0.0.1:35974 #20 (12 connections now open)
m30000| Wed Jun 13 22:38:18 [conn20] authenticate db: local { authenticate: 1, nonce: "d3267b8111337ef6", user: "__system", key: "6b96c0e71a39dbcf4a861cb2a6e2ff6d" }
m30999| Wed Jun 13 22:38:18 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:38:18 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:38:18 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95caa63e4ab8b6eed9dda" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95c8c63e4ab8b6eed9dd9" } }
m30999| Wed Jun 13 22:38:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95caa63e4ab8b6eed9dda
m30999| Wed Jun 13 22:38:18 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:38:18 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:38:18 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:38:18 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Wed Jun 13 22:38:18 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:38:18 [Balancer] shard0000
m30999| Wed Jun 13 22:38:18 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:38:18 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:38:18 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:38:18 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:38:18 [Balancer] shard0001
m30999| Wed Jun 13 22:38:18 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:38:18 [Balancer] ----
m30999| Wed Jun 13 22:38:18 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:38:18 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:38:18 [Balancer] receiver : 4 chunks on shard0000
m30999| Wed Jun 13 22:38:18 [Balancer] draining : 1(1)
m30999| Wed Jun 13 22:38:18 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:38:18 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:38:18 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:38:18 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:38:18 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:38:18 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:38:18 [conn19] end connection 127.0.0.1:42227 (11 connections now open)
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:38:38 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:38:38 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30000| Wed Jun 13 22:38:41 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:38:41 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30000:1339644851:1096328315', sleeping for 30000ms
{
"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
"chunks" : NumberLong(1),
"dbs" : NumberLong(0)
},
"ok" : 1
}
m30999| Wed Jun 13 22:38:48 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:38:48 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:38:48 [Balancer] connected connection!
m30000| Wed Jun 13 22:38:48 [initandlisten] connection accepted from 127.0.0.1:35975 #21 (12 connections now open)
m30000| Wed Jun 13 22:38:48 [conn21] authenticate db: local { authenticate: 1, nonce: "4d5635ac07683a9c", user: "__system", key: "ec43b69a96c6e93df624ff6fddc72dcb" }
m30999| Wed Jun 13 22:38:48 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:38:48 [Balancer] about to acquire distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383:
m30999| { "state" : 1,
m30999| "who" : "tp2.10gen.cc:30999:1339644848:1804289383:Balancer:846930886",
m30999| "process" : "tp2.10gen.cc:30999:1339644848:1804289383",
m30999| "when" : { "$date" : "Wed Jun 13 22:38:48 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "4fd95cc863e4ab8b6eed9ddb" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "4fd95caa63e4ab8b6eed9dda" } }
m30999| Wed Jun 13 22:38:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' acquired, ts : 4fd95cc863e4ab8b6eed9ddb
m30999| Wed Jun 13 22:38:48 [Balancer] *** start balancing round
m30999| Wed Jun 13 22:38:48 [Balancer] ---- ShardInfoMap
m30999| Wed Jun 13 22:38:48 [Balancer] shard0000 maxSize: 0 currSize: 96 draining: 0 hasOpsQueued: 0
m30999| Wed Jun 13 22:38:48 [Balancer] shard0001 maxSize: 0 currSize: 32 draining: 1 hasOpsQueued: 0
m30999| Wed Jun 13 22:38:48 [Balancer] ---- ShardToChunksMap
m30999| Wed Jun 13 22:38:48 [Balancer] shard0000
m30999| Wed Jun 13 22:38:48 [Balancer] { _id: "foo.bar-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:38:48 [Balancer] { _id: "foo.bar-_id_0.0", lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:38:48 [Balancer] { _id: "foo.bar-_id_2.0", lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" }
m30999| Wed Jun 13 22:38:48 [Balancer] { _id: "foo.bar-_id_3.0", lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 3.0 }, max: { _id: MaxKey }, shard: "shard0000" }
m30999| Wed Jun 13 22:38:48 [Balancer] shard0001
m30999| Wed Jun 13 22:38:48 [Balancer] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:38:48 [Balancer] ----
m30999| Wed Jun 13 22:38:48 [Balancer] collection : foo.bar
m30999| Wed Jun 13 22:38:48 [Balancer] donor : 4 chunks on shard0000
m30999| Wed Jun 13 22:38:48 [Balancer] receiver : 4 chunks on shard0000
m30999| Wed Jun 13 22:38:48 [Balancer] draining : 1(1)
m30999| Wed Jun 13 22:38:48 [Balancer] chose [shard0001] to [shard0000] { _id: "foo.bar-_id_1.0", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('4fd95bb263e4ab8b6eed9dd1'), ns: "foo.bar", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" }
m30999| Wed Jun 13 22:38:48 [Balancer] User Assertion: 10199:right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:38:48 [Balancer] distributed lock 'balancer/tp2.10gen.cc:30999:1339644848:1804289383' unlocked.
m30999| Wed Jun 13 22:38:48 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:38:48 [Balancer] caught exception while doing balance: right object ({}) doesn't have full shard key ({ _id: 1.0 })
m30999| Wed Jun 13 22:38:48 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:38:48 [conn20] end connection 127.0.0.1:35974 (11 connections now open)
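The balancing round above is why the drain never finishes: the balancer picks the one remaining foo.bar chunk on the draining shard0001, but the move aborts with assertion 10199 because the chosen bound document ({}) does not contain the { _id: 1 } shard key, so remaining.chunks never drops below 1. A hedged illustration (not part of the test) of how the stuck state could be inspected from this mongos:

    // Look at the cluster metadata the balancer is acting on.
    var config = db.getSiblingDB("config");
    printjson(config.shards.find().toArray());                  // shard0001 is expected to carry { draining: true }
    printjson(config.chunks.find({ ns: "foo.bar" }).toArray()); // the foo.bar-_id_1.0 chunk should still list shard: "shard0001"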
{
    "msg" : "draining ongoing",
    "state" : "ongoing",
    "remaining" : {
        "chunks" : NumberLong(1),
        "dbs" : NumberLong(0)
    },
    "ok" : 1
}
m30999| Wed Jun 13 22:39:08 [LockPinger] cluster localhost:30000 pinged successfully at Wed Jun 13 22:39:08 2012 by distributed lock pinger 'localhost:30000/tp2.10gen.cc:30999:1339644848:1804289383', sleeping for 30000ms
{
    "msg" : "draining ongoing",
    "state" : "ongoing",
    "remaining" : {
        "chunks" : NumberLong(1),
        "dbs" : NumberLong(0)
    },
    "ok" : 1
}
m27000| Wed Jun 13 22:39:10 [conn2] command admin.$cmd command: { writebacklisten: ObjectId('4fd95bb063e4ab8b6eed9dce') } ntoreturn:1 keyUpdates:0 locks(micros) R:9 W:230 r:493 reslen:44 300005ms
m30999| Wed Jun 13 22:39:10 [WriteBackListener-localhost:27000] writebacklisten result: { noop: true, ok: 1.0 }
{
    "msg" : "draining ongoing",
    "state" : "ongoing",
    "remaining" : {
        "chunks" : NumberLong(1),
        "dbs" : NumberLong(0)
    },
    "ok" : 1
}
assert.soon failed: function () {
    var result = admin.runCommand({removeShard:conn.host});
    printjson(result);
    return result.ok && result.state == "completed";
}, msg:failed to drain shard completely
Error("Printing Stack Trace")@:0
()@src/mongo/shell/utils.js:37
("assert.soon failed: function () {\n var result = admin.runCommand({removeShard:conn.host});\n printjson(result);\n return result.ok && result.state == \"completed\";\n}, msg:failed to drain shard completely")@src/mongo/shell/utils.js:58
((function () {var result = admin.runCommand({removeShard:conn.host});printjson(result);return result.ok && result.state == "completed";}),"failed to drain shard completely",300000)@src/mongo/shell/utils.js:167
@/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/auth_add_shard.js:101
Wed Jun 13 22:39:18 uncaught exception: assert.soon failed: function () {
    var result = admin.runCommand({removeShard:conn.host});
    printjson(result);
    return result.ok && result.state == "completed";
}, msg:failed to drain shard completely
failed to load: /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/auth_add_shard.js
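The assert.soon above timed out after the full 300000 ms without the removeShard state ever leaving "ongoing", which is what fails the test. For reference, a hedged sketch of the completion check the test was waiting for (field values in the comment are illustrative, not taken from this log):

    // Once the last chunk has migrated off, the same command is expected to answer
    // with state "completed", e.g.
    //   { "msg" : "removeshard completed successfully", "state" : "completed", "shard" : "shard0001", "ok" : 1 }
    var res = admin.runCommand({ removeShard: conn.host });
    assert.eq("completed", res.state);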
m27000| Wed Jun 13 22:39:18 got signal 15 (Terminated), will terminate after current cmd ends
m27000| Wed Jun 13 22:39:18 [interruptThread] now exiting
m27000| Wed Jun 13 22:39:18 dbexit:
m27000| Wed Jun 13 22:39:18 [interruptThread] shutdown: going to close listening sockets...
m27000| Wed Jun 13 22:39:18 [interruptThread] closing listening socket: 27
m27000| Wed Jun 13 22:39:18 [interruptThread] closing listening socket: 28
m27000| Wed Jun 13 22:39:18 [interruptThread] closing listening socket: 29
m27000| Wed Jun 13 22:39:18 [interruptThread] removing socket file: /tmp/mongodb-27000.sock
m27000| Wed Jun 13 22:39:18 [interruptThread] shutdown: going to flush diaglog...
m27000| Wed Jun 13 22:39:18 [interruptThread] shutdown: going to close sockets...
m27000| Wed Jun 13 22:39:18 [interruptThread] shutdown: waiting for fs preallocator...
m27000| Wed Jun 13 22:39:18 [interruptThread] shutdown: closing all files...
m30999| Wed Jun 13 22:39:18 [WriteBackListener-localhost:27000] SocketException: remote: 127.0.0.1:27000 error: 9001 socket exception [0] server [127.0.0.1:27000]
m30999| Wed Jun 13 22:39:18 [WriteBackListener-localhost:27000] DBClientCursor::init call() failed
m30000| Wed Jun 13 22:39:18 [conn10] end connection 127.0.0.1:42211 (10 connections now open)
m30999| Wed Jun 13 22:39:18 [WriteBackListener-localhost:27000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:27000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd95bb063e4ab8b6eed9dce') }
m30000| Wed Jun 13 22:39:18 [conn11] end connection 127.0.0.1:42212 (10 connections now open)
m30999| Wed Jun 13 22:39:18 [WriteBackListener-localhost:27000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:27000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd95bb063e4ab8b6eed9dce') }
m27000| Wed Jun 13 22:39:18 [interruptThread] closeAllFiles() finished
m27000| Wed Jun 13 22:39:18 [interruptThread] shutdown: removing fs lock...
m27000| Wed Jun 13 22:39:18 dbexit: really exiting now
m30999| Wed Jun 13 22:39:18 [Balancer] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:39:18 BackgroundJob starting: ConnectBG
m30000| Wed Jun 13 22:39:18 [initandlisten] connection accepted from 127.0.0.1:35976 #22 (10 connections now open)
m30999| Wed Jun 13 22:39:18 [Balancer] connected connection!
m30000| Wed Jun 13 22:39:18 [conn22] authenticate db: local { authenticate: 1, nonce: "8dd6499623d7cac3", user: "__system", key: "5cd2952ab9d801520444b3f53c739c5f" }
m30999| Wed Jun 13 22:39:18 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Jun 13 22:39:18 [Balancer] SocketException: remote: 127.0.0.1:27000 error: 9001 socket exception [0] server [127.0.0.1:27000]
m30999| Wed Jun 13 22:39:18 [Balancer] DBClientCursor::init call() failed
m30999| Wed Jun 13 22:39:18 [Balancer] User Assertion: 10276:DBClientBase::findN: transport error: localhost:27000 ns: admin.$cmd query: { features: 1 }
m30999| Wed Jun 13 22:39:18 [Balancer] scoped connection to localhost:30000 not being returned to the pool
m30999| Wed Jun 13 22:39:18 [Balancer] caught exception while doing balance: DBClientBase::findN: transport error: localhost:27000 ns: admin.$cmd query: { features: 1 }
m30999| Wed Jun 13 22:39:18 [Balancer] *** End of balancing round
m30000| Wed Jun 13 22:39:18 [conn21] end connection 127.0.0.1:35975 (9 connections now open)
m30000| Wed Jun 13 22:39:19 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Wed Jun 13 22:39:19 [interruptThread] now exiting
m30000| Wed Jun 13 22:39:19 dbexit:
m30000| Wed Jun 13 22:39:19 [interruptThread] shutdown: going to close listening sockets...
m30000| Wed Jun 13 22:39:19 [interruptThread] closing listening socket: 19
m30000| Wed Jun 13 22:39:19 [interruptThread] closing listening socket: 21
m30000| Wed Jun 13 22:39:19 [interruptThread] closing listening socket: 22
m30000| Wed Jun 13 22:39:19 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Wed Jun 13 22:39:19 [interruptThread] shutdown: going to flush diaglog...
m30000| Wed Jun 13 22:39:19 [interruptThread] shutdown: going to close sockets...
m30000| Wed Jun 13 22:39:19 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Wed Jun 13 22:39:19 [interruptThread] shutdown: closing all files...
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000]
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:30000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd95bb063e4ab8b6eed9dce') }
m30000| Wed Jun 13 22:39:19 [conn8] end connection 127.0.0.1:42208 (8 connections now open)
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd95bb063e4ab8b6eed9dce') }
m30000| Wed Jun 13 22:39:19 [conn9] end connection 127.0.0.1:42209 (7 connections now open)
m30000| Wed Jun 13 22:39:19 [conn22] end connection 127.0.0.1:35976 (7 connections now open)
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:30000] Socket recv() errno:104 Connection reset by peer 127.0.0.1:30000
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [1] server [127.0.0.1:30000]
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:27000] creating new connection to:localhost:27000
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:30000] Assertion: 13632:couldn't get updated shard list from config server
m30999| 0x846782a 0x8676fb1 0x85ef1c0 0x8421b42 0x842011a 0x86e97b1 0x85127d1 0x8515478 0x8515390 0x8515316 0x8515298 0x84569ca 0xd5d919 0x1ecd4e
m30999| Wed Jun 13 22:39:19 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:27000] WriteBackListener exception : socket exception
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:27000] creating new connection to:localhost:30000
m30999| Wed Jun 13 22:39:19 BackgroundJob starting: ConnectBG
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:27000] connected connection!
m30000| Wed Jun 13 22:39:19 [interruptThread] closeAllFiles() finished
m30000| Wed Jun 13 22:39:19 [interruptThread] shutdown: removing fs lock...
m30000| Wed Jun 13 22:39:19 dbexit: really exiting now
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:27000] Socket recv() errno:104 Connection reset by peer 127.0.0.1:30000
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:27000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [1] server [127.0.0.1:30000]
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:27000] DBClientCursor::init call() failed
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:27000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: local.$cmd query: { getnonce: 1 }
m30999| Wed Jun 13 22:39:19 [WriteBackListener-localhost:27000] ERROR: backgroundjob WriteBackListener-localhost:27000error: DBClientBase::findN: transport error: localhost:30000 ns: local.$cmd query: { getnonce: 1 }
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x26) [0x846782a]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo10logContextEPKc+0x5b) [0x8676fb1]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xee) [0x85ef1c0]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo15StaticShardInfo6reloadEv+0x196) [0x8421b42]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo/mongos(_ZN5mongo5Shard15reloadShardInfoEv+0x20) [0x842011a]
m30999| /home/yellow/buildslave/Linux_32bit_debug/mongo
                313621.879101ms
Wed Jun 13 22:39:21 got signal 15 (Terminated), will terminate after current cmd ends
Wed Jun 13 22:39:21 [interruptThread] now exiting
Wed Jun 13 22:39:21 dbexit:
Wed Jun 13 22:39:21 [interruptThread] shutdown: going to close listening sockets...
Wed Jun 13 22:39:21 [interruptThread] closing listening socket: 6
Wed Jun 13 22:39:21 [interruptThread] closing listening socket: 7
Wed Jun 13 22:39:21 [interruptThread] closing listening socket: 8
Wed Jun 13 22:39:21 [interruptThread] removing socket file: /tmp/mongodb-27999.sock
Wed Jun 13 22:39:21 [interruptThread] shutdown: going to flush diaglog...
Wed Jun 13 22:39:21 [interruptThread] shutdown: going to close sockets...
Wed Jun 13 22:39:21 [interruptThread] shutdown: waiting for fs preallocator...
Wed Jun 13 22:39:21 [interruptThread] shutdown: closing all files...
Wed Jun 13 22:39:21 [interruptThread] closeAllFiles() finished
Wed Jun 13 22:39:21 [interruptThread] shutdown: removing fs lock...
Wed Jun 13 22:39:21 dbexit: really exiting now
test /home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/auth_add_shard.js exited with status 253
7 tests succeeded
84 tests didn't get run
The following tests failed (with exit code):
/home/yellow/buildslave/Linux_32bit_debug/mongo/jstests/sharding/auth_add_shard.js 253
Traceback (most recent call last):
  File "/home/yellow/buildslave/Linux_32bit_debug/mongo/buildscripts/smoke.py", line 782, in <module>
    main()
  File "/home/yellow/buildslave/Linux_32bit_debug/mongo/buildscripts/smoke.py", line 778, in main
    report()
  File "/home/yellow/buildslave/Linux_32bit_debug/mongo/buildscripts/smoke.py", line 490, in report
    raise Exception("Test failures")
Exception: Test failures
scons: *** [smokeSharding] Error 1
scons: building terminated because of errors.