2012-08-09 09:05:36 EDT | Thu Aug 09 13:05:36 [conn40] end connection 127.0.0.1:50585 (0 connections now open) |
2012-08-09 09:07:07 EDT | Thu Aug 09 13:07:07 [initandlisten] connection accepted from 127.0.0.1:50832 #41 (1 connection now open) |
| MongoDB shell version: 2.2.0-rc1-pre- |
| null |
| Resetting db path '/data/db/test0' |
| Thu Aug 09 13:07:08 shell: started program mongod.exe --port 30000 --dbpath /data/db/test0 |
| m30000| Thu Aug 09 13:07:08 [initandlisten] MongoDB starting : pid=5188 port=30000 dbpath=/data/db/test0 64-bit host=AMAZONA-J7UBCUV |
| m30000| Thu Aug 09 13:07:08 [initandlisten] _DEBUG build (which is slower) |
| m30000| Thu Aug 09 13:07:08 [initandlisten] db version v2.2.0-rc1-pre-, pdfile version 4.5 |
| m30000| Thu Aug 09 13:07:08 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49 |
| m30000| Thu Aug 09 13:07:08 [initandlisten] git version: 78de2819dca377af2e4c26b1160832336a573126 |
| m30000| Thu Aug 09 13:07:08 [initandlisten] journal dir=/data/db/test0/journal |
| m30000| Thu Aug 09 13:07:08 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 } |
| m30000| Thu Aug 09 13:07:08 [initandlisten] opening db: local |
| m30000| Thu Aug 09 13:07:08 [initandlisten] recover : no journal files present, no recovery needed |
| m30000| Thu Aug 09 13:07:08 [initandlisten] waiting for connections on port 30000 |
| m30000| Thu Aug 09 13:07:08 [initandlisten] connection accepted from 127.0.0.1:50841 #1 (1 connection now open) |
| m30000| Thu Aug 09 13:07:08 [websvr] admin web console waiting for connections on port 31000 |
| Resetting db path '/data/db/test1' |
| Thu Aug 09 13:07:08 shell: started program mongod.exe --port 30001 --dbpath /data/db/test1 |
| m30001| Thu Aug 09 13:07:08 [initandlisten] MongoDB starting : pid=3892 port=30001 dbpath=/data/db/test1 64-bit host=AMAZONA-J7UBCUV |
| m30001| Thu Aug 09 13:07:08 [initandlisten] _DEBUG build (which is slower) |
| m30001| Thu Aug 09 13:07:08 [initandlisten] db version v2.2.0-rc1-pre-, pdfile version 4.5 |
| m30001| Thu Aug 09 13:07:08 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49 |
| m30001| Thu Aug 09 13:07:08 [initandlisten] git version: 78de2819dca377af2e4c26b1160832336a573126 |
| m30001| Thu Aug 09 13:07:08 [initandlisten] journal dir=/data/db/test1/journal |
| m30001| Thu Aug 09 13:07:08 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 } |
| m30001| Thu Aug 09 13:07:08 [initandlisten] recover : no journal files present, no recovery needed |
| m30001| Thu Aug 09 13:07:09 [initandlisten] opening db: local |
| m30001| Thu Aug 09 13:07:09 [initandlisten] waiting for connections on port 30001 |
| m30001| Thu Aug 09 13:07:09 [initandlisten] connection accepted from 127.0.0.1:50842 #1 (1 connection now open) |
| m30001| Thu Aug 09 13:07:09 [websvr] admin web console waiting for connections on port 31001 |
| Resetting db path '/data/db/test2' |
| Thu Aug 09 13:07:09 shell: started program mongod.exe --port 30002 --dbpath /data/db/test2 |
| m30002| Thu Aug 09 13:07:09 [initandlisten] MongoDB starting : pid=4088 port=30002 dbpath=/data/db/test2 64-bit host=AMAZONA-J7UBCUV |
| m30002| Thu Aug 09 13:07:09 [initandlisten] _DEBUG build (which is slower) |
| m30002| Thu Aug 09 13:07:09 [initandlisten] db version v2.2.0-rc1-pre-, pdfile version 4.5 |
| m30002| Thu Aug 09 13:07:09 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49 |
| m30002| Thu Aug 09 13:07:09 [initandlisten] git version: 78de2819dca377af2e4c26b1160832336a573126 |
| m30002| Thu Aug 09 13:07:09 [initandlisten] journal dir=/data/db/test2/journal |
| m30002| Thu Aug 09 13:07:09 [initandlisten] options: { dbpath: "/data/db/test2", port: 30002 } |
| m30002| Thu Aug 09 13:07:09 [initandlisten] opening db: local |
| m30002| Thu Aug 09 13:07:09 [initandlisten] recover : no journal files present, no recovery needed |
| m30002| Thu Aug 09 13:07:09 [initandlisten] waiting for connections on port 30002 |
| m30002| Thu Aug 09 13:07:09 [initandlisten] connection accepted from 127.0.0.1:50843 #1 (1 connection now open) |
| m30002| Thu Aug 09 13:07:09 [websvr] admin web console waiting for connections on port 31002 |
| Resetting db path '/data/db/test-config0' |
| Thu Aug 09 13:07:09 shell: started program mongod.exe --port 29000 --dbpath /data/db/test-config0 --configsvr |
| m29000| Thu Aug 09 13:07:09 [initandlisten] MongoDB starting : pid=948 port=29000 dbpath=/data/db/test-config0 64-bit host=AMAZONA-J7UBCUV |
| m29000| Thu Aug 09 13:07:09 [initandlisten] _DEBUG build (which is slower) |
| m29000| Thu Aug 09 13:07:09 [initandlisten] db version v2.2.0-rc1-pre-, pdfile version 4.5 |
| m29000| Thu Aug 09 13:07:09 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49 |
| m29000| Thu Aug 09 13:07:09 [initandlisten] git version: 78de2819dca377af2e4c26b1160832336a573126 |
| m29000| Thu Aug 09 13:07:09 [initandlisten] journal dir=/data/db/test-config0/journal |
| m29000| Thu Aug 09 13:07:09 [initandlisten] options: { configsvr: true, dbpath: "/data/db/test-config0", port: 29000 } |
| m29000| Thu Aug 09 13:07:09 [initandlisten] recover : no journal files present, no recovery needed |
| m29000| Thu Aug 09 13:07:10 [initandlisten] opening db: local |
| m29000| Thu Aug 09 13:07:10 [initandlisten] waiting for connections on port 29000 |
| m29000| Thu Aug 09 13:07:10 [websvr] ERROR: listen(): bind() failed errno:10048 Only one usage of each socket address (protocol/network address/port) is normally permitted. for socket: 0.0.0.0:30000 |
| m29000| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50844 #1 (1 connection now open) |
| m29000| Thu Aug 09 13:07:10 [websvr] thread websvr stack usage was 19800 bytes, which is the most so far |
| m29000| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50845 #2 (2 connections now open) |
| "localhost:29000" |
| ShardingTest test : |
| { |
| "config" : "localhost:29000", |
| "shards" : [ |
| connection to localhost:30000, |
| connection to localhost:30001, |
| connection to localhost:30002 |
| ] |
| } |
| Thu Aug 09 13:07:10 shell: started program mongos.exe --port 30999 --configdb localhost:29000 --chunkSize 1 |
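[Note: a minimal sketch, assuming the mongo shell's ShardingTest test harness, of a call that could produce the startup sequence above. The exact option names are an assumption, not taken from this log.]

    // Hypothetical harness call: three mongod shards, one config server,
    // one mongos, 1 MB chunk size, matching the ports and options logged.
    var st = new ShardingTest({ shards: 3, mongos: 1, other: { chunkSize: 1 } });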
| m30999| Thu Aug 09 13:07:10 [mongosMain] MongoS version 2.2.0-rc1-pre- starting: pid=3572 port=30999 64-bit host=AMAZONA-J7UBCUV (--help for usage) |
| m30999| Thu Aug 09 13:07:10 [mongosMain] _DEBUG build |
| m30999| Thu Aug 09 13:07:10 [mongosMain] git version: 78de2819dca377af2e4c26b1160832336a573126 |
| m30999| Thu Aug 09 13:07:10 [mongosMain] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49 |
| m30999| Thu Aug 09 13:07:10 [mongosMain] options: { chunkSize: 1, configdb: "localhost:29000", port: 30999 } |
| m29000| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50849 #3 (3 connections now open) |
| m29000| Thu Aug 09 13:07:10 [conn3] opening db: config |
| m29000| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50850 #4 (4 connections now open) |
| m29000| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50851 #5 (5 connections now open) |
| m29000| Thu Aug 09 13:07:10 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes... |
| m29000| Thu Aug 09 13:07:10 [FileAllocator] creating directory /data/db/test-config0/_tmp |
| m29000| Thu Aug 09 13:07:10 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0.132 secs |
| m29000| Thu Aug 09 13:07:10 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes... |
| m29000| Thu Aug 09 13:07:10 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0.133 secs |
| m29000| Thu Aug 09 13:07:10 [conn5] datafileheader::init initializing /data/db/test-config0/config.0 n:0 |
| m29000| Thu Aug 09 13:07:10 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes... |
| m29000| Thu Aug 09 13:07:10 [conn5] build index config.version { _id: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn5] build index done. scanned 0 total records. 0.001 secs |
| m29000| Thu Aug 09 13:07:10 [conn5] insert config.version keyUpdates:0 locks(micros) w:278882 280ms |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.settings { _id: 1 } |
| m30999| Thu Aug 09 13:07:10 [Balancer] about to contact config servers and shards |
| m30999| Thu Aug 09 13:07:10 [websvr] admin web console waiting for connections on port 31999 |
| m30999| Thu Aug 09 13:07:10 [mongosMain] waiting for connections on port 30999 |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 0 total records. 0.005 secs |
| m30999| Thu Aug 09 13:07:10 [Balancer] config servers and shards contacted successfully |
| m30999| Thu Aug 09 13:07:10 [Balancer] balancer id: AMAZONA-J7UBCUV:30999 started at Aug 09 13:07:10 |
| m30999| Thu Aug 09 13:07:10 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.chunks { _id: 1 } |
| m29000| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50852 #6 (6 connections now open) |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 0 total records. 0.002 secs |
| m29000| Thu Aug 09 13:07:10 [conn3] info: creating collection config.chunks on add index |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.chunks { ns: 1, min: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 0 total records. 0.001 secs |
| m29000| Thu Aug 09 13:07:10 [conn5] build index config.mongos { _id: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn5] build index done. scanned 0 total records. 0.001 secs |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 0 total records. 0.001 secs |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.chunks { ns: 1, lastmod: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 0 total records. 0.001 secs |
| m30999| Thu Aug 09 13:07:10 warning: running with 1 config server should be done only for testing purposes and is not recommended for production |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.shards { _id: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 0 total records. 0.001 secs |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.shards { host: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 0 total records. 0.002 secs |
| m29000| Thu Aug 09 13:07:10 [conn3] info: creating collection config.shards on add index |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.lockpings { _id: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 0 total records. 0.001 secs |
| m29000| Thu Aug 09 13:07:10 [conn6] build index config.locks { _id: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn6] build index done. scanned 0 total records. 0.001 secs |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.lockpings { ping: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 1 total records. 0.001 secs |
| m30999| Thu Aug 09 13:07:10 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' acquired, ts : 5023b5fef6943830424ba962 |
| m30999| Thu Aug 09 13:07:10 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' unlocked. |
| m30999| Thu Aug 09 13:07:10 [LockPinger] creating distributed lock ping thread for localhost:29000 and process AMAZONA-J7UBCUV:30999:1344517630:41 (sleeping for 30000ms) |
| m30999| Thu Aug 09 13:07:10 [mongosMain] connection accepted from 127.0.0.1:50847 #1 (1 connection now open) |
| ShardingTest undefined going to add shard : localhost:30000 |
| m29000| Thu Aug 09 13:07:10 [conn3] build index config.databases { _id: 1 } |
| m29000| Thu Aug 09 13:07:10 [conn3] build index done. scanned 0 total records. 0.03 secs |
| m30999| Thu Aug 09 13:07:10 [conn1] couldn't find database [admin] in config db |
| m30999| Thu Aug 09 13:07:10 [conn1] put [admin] on: config:localhost:29000 |
| m30000| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50853 #2 (2 connections now open) |
| m30999| Thu Aug 09 13:07:10 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" } |
| ShardingTest undefined going to add shard : localhost:30001 |
| m30001| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50854 #2 (2 connections now open) |
| m30999| Thu Aug 09 13:07:10 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" } |
| { "shardAdded" : "shard0000", "ok" : 1 } |
| ShardingTest undefined going to add shard : localhost:30002 |
| { "shardAdded" : "shard0001", "ok" : 1 } |
| m30002| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50855 #2 (2 connections now open) |
| m30999| Thu Aug 09 13:07:10 [conn1] going to add shard: { _id: "shard0002", host: "localhost:30002" } |
| |
| |
| **** unsharded **** |
| |
| |
| { "shardAdded" : "shard0002", "ok" : 1 } |
| m29000| Thu Aug 09 13:07:10 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0.273 secs |
| m30999| Thu Aug 09 13:07:10 [conn1] couldn't find database [unsharded] in config db |
| m30999| Thu Aug 09 13:07:10 [conn1] put [unsharded] on: shard0000:localhost:30000 |
| m30000| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50856 #3 (3 connections now open) |
| m30001| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50857 #3 (3 connections now open) |
| m30999| Thu Aug 09 13:07:10 [conn1] creating WriteBackListener for: localhost:30000 serverID: 5023b5fef6943830424ba961 |
| m30002| Thu Aug 09 13:07:10 [initandlisten] connection accepted from 127.0.0.1:50858 #3 (3 connections now open) |
| m30999| Thu Aug 09 13:07:10 [conn1] creating WriteBackListener for: localhost:30001 serverID: 5023b5fef6943830424ba961 |
| m30000| Thu Aug 09 13:07:10 [conn3] _DEBUG ReadContext db wasn't open, will try to open unsharded.name.files |
| m30000| Thu Aug 09 13:07:10 [conn3] opening db: unsharded |
| m30999| Thu Aug 09 13:07:10 [conn1] creating WriteBackListener for: localhost:30002 serverID: 5023b5fef6943830424ba961 |
| Thu Aug 09 13:07:11 shell: started program mongofiles.exe --port 30999 put mongod.exe --db unsharded |
2012-08-09 09:07:13 EDT | sh4240| connected to: 127.0.0.1:30999 |
| m30000| Thu Aug 09 13:07:11 [initandlisten] connection accepted from 127.0.0.1:50860 #4 (4 connections now open) |
| m30001| Thu Aug 09 13:07:11 [initandlisten] connection accepted from 127.0.0.1:50861 #4 (4 connections now open) |
| m30002| Thu Aug 09 13:07:11 [initandlisten] connection accepted from 127.0.0.1:50862 #4 (4 connections now open) |
| m30000| Thu Aug 09 13:07:11 [FileAllocator] allocating new datafile /data/db/test0/unsharded.ns, filling with zeroes... |
| m30000| Thu Aug 09 13:07:11 [FileAllocator] creating directory /data/db/test0/_tmp |
| m30000| Thu Aug 09 13:07:11 [FileAllocator] done allocating datafile /data/db/test0/unsharded.ns, size: 16MB, took 0.132 secs |
| m30000| Thu Aug 09 13:07:11 [FileAllocator] allocating new datafile /data/db/test0/unsharded.0, filling with zeroes... |
| m30000| Thu Aug 09 13:07:11 [FileAllocator] done allocating datafile /data/db/test0/unsharded.0, size: 64MB, took 0.528 secs |
| m30000| Thu Aug 09 13:07:11 [conn4] datafileheader::init initializing /data/db/test0/unsharded.0 n:0 |
| m30000| Thu Aug 09 13:07:11 [FileAllocator] allocating new datafile /data/db/test0/unsharded.1, filling with zeroes... |
| m30000| Thu Aug 09 13:07:11 [conn4] build index unsharded.fs.files { _id: 1 } |
| m30000| Thu Aug 09 13:07:11 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30000| Thu Aug 09 13:07:11 [conn4] info: creating collection unsharded.fs.files on add index |
| m30000| Thu Aug 09 13:07:11 [conn4] build index unsharded.fs.files { filename: 1 } |
| m30000| Thu Aug 09 13:07:11 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30000| Thu Aug 09 13:07:11 [conn4] insert unsharded.system.indexes keyUpdates:0 locks(micros) w:675083 670ms |
| m30000| Thu Aug 09 13:07:11 [conn4] build index unsharded.fs.chunks { _id: 1 } |
| m30000| Thu Aug 09 13:07:11 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30000| Thu Aug 09 13:07:11 [conn4] info: creating collection unsharded.fs.chunks on add index |
| m30000| Thu Aug 09 13:07:11 [conn4] build index unsharded.fs.chunks { files_id: 1, n: 1 } |
| m30000| Thu Aug 09 13:07:11 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30000| Thu Aug 09 13:07:12 [conn4] insert unsharded.fs.chunks keyUpdates:0 locks(micros) w:1553 608ms |
| m30000| Thu Aug 09 13:07:13 [FileAllocator] done allocating datafile /data/db/test0/unsharded.1, size: 128MB, took 1.081 secs |
| m30000| Thu Aug 09 13:07:13 [conn4] command unsharded.$cmd command: { filemd5: ObjectId('5023b5ff0192d23ae9a162cc'), root: "fs" } ntoreturn:1 keyUpdates:0 numYields: 80 locks(micros) r:74614 reslen:94 296ms |
| m30999| Thu Aug 09 13:07:11 [mongosMain] connection accepted from 127.0.0.1:50859 #2 (2 connections now open) |
| m30000| Thu Aug 09 13:07:13 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:13 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:13 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:13 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:13 [conn4] insert unsharded.fs.files keyUpdates:0 locks(micros) w:1023 561ms |
| sh4240| added file: { _id: ObjectId('5023b5ff0192d23ae9a162cc'), filename: "mongod.exe", chunkSize: 262144, uploadDate: new Date(1344517633118), md5: "4c933cd6fa8b299c8f87c96b06aaf38f", length: 20939264 } |
| m30999| Thu Aug 09 13:07:13 [conn2] end connection 127.0.0.1:50859 (1 connection now open) |
| fileObj: { |
| "_id" : ObjectId("5023b5ff0192d23ae9a162cc"), |
| "filename" : "mongod.exe", |
| "chunkSize" : 262144, |
| "uploadDate" : ISODate("2012-08-09T13:07:13.118Z"), |
| "md5" : "4c933cd6fa8b299c8f87c96b06aaf38f", |
| "length" : 20939264 |
| } |
| sh4240| done! |
| m30000| Thu Aug 09 13:07:14 [conn3] command unsharded.$cmd command: { filemd5: ObjectId('5023b5ff0192d23ae9a162cc') } ntoreturn:1 keyUpdates:0 numYields: 80 locks(micros) r:70637 reslen:94 296ms |
| filemd5 output: { "numChunks" : 80, "md5" : "4c933cd6fa8b299c8f87c96b06aaf38f", "ok" : 1 } |
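[Note: a sketch of the filemd5 check the test issues through mongos; the ObjectId is the one logged for this upload, and root: "fs" is the default GridFS prefix.]

    // Recomputes the MD5 server-side from the fs.chunks documents and reports
    // the chunk count; it should match the md5 field stored in fs.files.
    db.getSiblingDB("unsharded").runCommand(
        { filemd5: ObjectId("5023b5ff0192d23ae9a162cc"), root: "fs" });
    // expected: { "numChunks" : 80, "md5" : "4c933cd6fa8b299c8f87c96b06aaf38f", "ok" : 1 }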
| |
| |
| **** sharded db, unsharded collection **** |
| |
| |
| m30000| Thu Aug 09 13:07:14 [initandlisten] connection accepted from 127.0.0.1:50865 #5 (5 connections now open) |
| m29000| Thu Aug 09 13:07:14 [conn3] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:14 [conn3] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:14 [conn3] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:14 [initandlisten] connection accepted from 127.0.0.1:50866 #5 (5 connections now open) |
| m30002| Thu Aug 09 13:07:14 [initandlisten] connection accepted from 127.0.0.1:50867 #5 (5 connections now open) |
| m30999| Thu Aug 09 13:07:14 [conn1] couldn't find database [sharded_db] in config db |
| m30999| Thu Aug 09 13:07:14 [conn1] put [sharded_db] on: shard0001:localhost:30001 |
| m30001| Thu Aug 09 13:07:14 [conn3] _DEBUG ReadContext db wasn't open, will try to open sharded_db.name.files |
| m30001| Thu Aug 09 13:07:14 [conn3] opening db: sharded_db |
| m30999| Thu Aug 09 13:07:14 [conn1] enabling sharding on: sharded_db |
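[Note: a sketch of the step logged as "enabling sharding on: sharded_db". Only the database is enabled here; fs.files and fs.chunks stay unsharded on the primary shard (shard0001).]

    db.getSiblingDB("admin").runCommand({ enableSharding: "sharded_db" });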
| Thu Aug 09 13:07:14 shell: started program mongofiles.exe --port 30999 put mongod.exe --db sharded_db |
2012-08-09 09:07:16 EDT | sh2892| connected to: 127.0.0.1:30999 |
| m30001| Thu Aug 09 13:07:14 [FileAllocator] allocating new datafile /data/db/test1/sharded_db.ns, filling with zeroes... |
| m30001| Thu Aug 09 13:07:14 [FileAllocator] creating directory /data/db/test1/_tmp |
| m30001| Thu Aug 09 13:07:14 [FileAllocator] done allocating datafile /data/db/test1/sharded_db.ns, size: 16MB, took 0.133 secs |
| m30001| Thu Aug 09 13:07:14 [FileAllocator] allocating new datafile /data/db/test1/sharded_db.0, filling with zeroes... |
| m30001| Thu Aug 09 13:07:15 [FileAllocator] done allocating datafile /data/db/test1/sharded_db.0, size: 64MB, took 0.552 secs |
| m30001| Thu Aug 09 13:07:15 [conn4] datafileheader::init initializing /data/db/test1/sharded_db.0 n:0 |
| m30001| Thu Aug 09 13:07:15 [FileAllocator] allocating new datafile /data/db/test1/sharded_db.1, filling with zeroes... |
| m30001| Thu Aug 09 13:07:15 [conn4] build index sharded_db.fs.files { _id: 1 } |
| m30001| Thu Aug 09 13:07:15 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30001| Thu Aug 09 13:07:15 [conn4] info: creating collection sharded_db.fs.files on add index |
| m30001| Thu Aug 09 13:07:15 [conn4] build index sharded_db.fs.files { filename: 1 } |
| m30001| Thu Aug 09 13:07:15 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30001| Thu Aug 09 13:07:15 [conn4] insert sharded_db.system.indexes keyUpdates:0 locks(micros) w:701418 702ms |
| m30001| Thu Aug 09 13:07:15 [conn4] build index sharded_db.fs.chunks { _id: 1 } |
| m30001| Thu Aug 09 13:07:15 [conn4] build index done. scanned 0 total records. 0.002 secs |
| m30001| Thu Aug 09 13:07:15 [conn4] info: creating collection sharded_db.fs.chunks on add index |
| m30001| Thu Aug 09 13:07:15 [conn4] build index sharded_db.fs.chunks { files_id: 1, n: 1 } |
| m30001| Thu Aug 09 13:07:15 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30001| Thu Aug 09 13:07:16 [conn4] insert sharded_db.fs.chunks keyUpdates:0 locks(micros) w:1606 639ms |
| m30001| Thu Aug 09 13:07:16 [FileAllocator] done allocating datafile /data/db/test1/sharded_db.1, size: 128MB, took 1.076 secs |
| m30001| Thu Aug 09 13:07:16 [conn4] command sharded_db.$cmd command: { filemd5: ObjectId('5023b602afb730f0e304ba19'), root: "fs" } ntoreturn:1 keyUpdates:0 numYields: 80 locks(micros) r:73116 reslen:94 296ms |
| m30999| Thu Aug 09 13:07:14 [mongosMain] connection accepted from 127.0.0.1:50868 #3 (2 connections now open) |
| m30001| Thu Aug 09 13:07:16 [conn4] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:16 [conn4] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:16 [conn4] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:16 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:16 [conn5] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:16 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' acquired, ts : 5023b604f6943830424ba963 |
| sh2892| added file: { _id: ObjectId('5023b602afb730f0e304ba19'), filename: "mongod.exe", chunkSize: 262144, uploadDate: new Date(1344517636581), md5: "4c933cd6fa8b299c8f87c96b06aaf38f", length: 20939264 } |
2012-08-09 09:07:17 EDT | m30001| Thu Aug 09 13:07:17 [conn4] insert sharded_db.fs.files keyUpdates:0 locks(micros) w:1048 1326ms |
| m30999| Thu Aug 09 13:07:16 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' unlocked. |
| m30999| Thu Aug 09 13:07:17 [conn3] end connection 127.0.0.1:50868 (1 connection now open) |
| fileObj: { |
| "_id" : ObjectId("5023b602afb730f0e304ba19"), |
| "filename" : "mongod.exe", |
| "chunkSize" : 262144, |
| "uploadDate" : ISODate("2012-08-09T13:07:16.581Z"), |
| "md5" : "4c933cd6fa8b299c8f87c96b06aaf38f", |
| "length" : 20939264 |
| } |
| sh2892| done! |
| m30001| Thu Aug 09 13:07:18 [conn3] command sharded_db.$cmd command: { filemd5: ObjectId('5023b602afb730f0e304ba19') } ntoreturn:1 keyUpdates:0 numYields: 80 locks(micros) r:71167 reslen:94 296ms |
| filemd5 output: { "numChunks" : 80, "md5" : "4c933cd6fa8b299c8f87c96b06aaf38f", "ok" : 1 } |
| |
| |
| |
| |
| **** sharded collection on files_id **** |
| m29000| Thu Aug 09 13:07:18 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:18 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:18 [conn4] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:18 [conn1] couldn't find database [sharded_files_id] in config db |
| m30999| Thu Aug 09 13:07:18 [conn1] put [sharded_files_id] on: shard0002:localhost:30002 |
| m30002| Thu Aug 09 13:07:18 [conn5] _DEBUG ReadContext db wasn't open, will try to open sharded_files_id.system.namespaces |
| m30002| Thu Aug 09 13:07:18 [conn5] opening db: sharded_files_id |
| m30999| Thu Aug 09 13:07:18 [conn1] CMD: shardcollection: { shardcollection: "sharded_files_id.fs.chunks", key: { files_id: 1.0 } } |
| m30002| Thu Aug 09 13:07:18 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id.ns, filling with zeroes... |
| m30999| Thu Aug 09 13:07:18 [conn1] enable sharding on: sharded_files_id.fs.chunks with shard key: { files_id: 1.0 } |
| m30002| Thu Aug 09 13:07:18 [FileAllocator] creating directory /data/db/test2/_tmp |
| m30002| Thu Aug 09 13:07:18 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id.ns, size: 16MB, took 0.132 secs |
| m30002| Thu Aug 09 13:07:18 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id.0, filling with zeroes... |
| m30002| Thu Aug 09 13:07:19 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id.0, size: 64MB, took 0.56 secs |
| m30002| Thu Aug 09 13:07:19 [conn5] datafileheader::init initializing /data/db/test2/sharded_files_id.0 n:0 |
| m30002| Thu Aug 09 13:07:19 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id.1, filling with zeroes... |
| m30002| Thu Aug 09 13:07:19 [conn5] build index sharded_files_id.fs.chunks { _id: 1 } |
| m30002| Thu Aug 09 13:07:19 [conn5] build index done. scanned 0 total records. 0.001 secs |
| m30002| Thu Aug 09 13:07:19 [conn5] info: creating collection sharded_files_id.fs.chunks on add index |
| m30002| Thu Aug 09 13:07:19 [conn5] build index sharded_files_id.fs.chunks { files_id: 1.0 } |
| m30002| Thu Aug 09 13:07:19 [conn5] build index done. scanned 0 total records. 0.001 secs |
| m30002| Thu Aug 09 13:07:19 [conn5] insert sharded_files_id.system.indexes keyUpdates:0 locks(micros) w:707922 702ms |
| m30999| Thu Aug 09 13:07:18 [conn1] enabling sharding on: sharded_files_id |
| m29000| Thu Aug 09 13:07:19 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:19 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:19 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:19 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:19 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:19 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:19 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:19 [conn6] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:19 [conn1] ChunkManager: time to load chunks for sharded_files_id.fs.chunks: 5ms sequenceNumber: 2 version: 1|0||5023b607f6943830424ba964 based on: (empty) |
| m30999| Thu Aug 09 13:07:19 [conn1] DEV WARNING appendDate() called with a tiny (but nonzero) date |
| m29000| Thu Aug 09 13:07:19 [conn4] build index config.collections { _id: 1 } |
| m29000| Thu Aug 09 13:07:19 [conn4] build index done. scanned 0 total records. 0.002 secs |
| m30999| Thu Aug 09 13:07:19 [conn1] going to create 1 chunk(s) for: sharded_files_id.fs.chunks using new epoch 5023b607f6943830424ba964 |
| m30999| Thu Aug 09 13:07:19 [conn1] resetting shard version of sharded_files_id.fs.chunks on localhost:30000, version is zero |
| m30002| Thu Aug 09 13:07:19 [conn3] no current chunk manager found for this shard, will initialize |
| m30999| Thu Aug 09 13:07:19 [conn1] resetting shard version of sharded_files_id.fs.chunks on localhost:30001, version is zero |
| m29000| Thu Aug 09 13:07:19 [initandlisten] connection accepted from 127.0.0.1:50871 #7 (7 connections now open) |
| m30999| range.universal(): 1 |
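[Note: a sketch of the commands logged above as "enabling sharding on: sharded_files_id" and "CMD: shardcollection": the GridFS chunks collection is sharded on files_id alone.]

    var admin = db.getSiblingDB("admin");
    admin.runCommand({ enableSharding: "sharded_files_id" });
    admin.runCommand({ shardCollection: "sharded_files_id.fs.chunks",
                       key: { files_id: 1 } });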
| Thu Aug 09 13:07:19 shell: started program mongofiles.exe --port 30999 put mongod.exe --db sharded_files_id |
| m30999| Thu Aug 09 13:07:19 [mongosMain] connection accepted from 127.0.0.1:50874 #4 (2 connections now open) |
| m30999| Thu Aug 09 13:07:19 [conn4] resetting shard version of sharded_files_id.fs.chunks on localhost:30000, version is zero |
| m30999| Thu Aug 09 13:07:19 [conn4] resetting shard version of sharded_files_id.fs.chunks on localhost:30001, version is zero |
| m30002| Thu Aug 09 13:07:19 [conn4] build index sharded_files_id.fs.files { _id: 1 } |
| m30002| Thu Aug 09 13:07:19 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30002| Thu Aug 09 13:07:19 [conn4] info: creating collection sharded_files_id.fs.files on add index |
| m30002| Thu Aug 09 13:07:19 [conn4] build index sharded_files_id.fs.files { filename: 1 } |
| m30002| Thu Aug 09 13:07:19 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30002| Thu Aug 09 13:07:19 [conn4] build index sharded_files_id.fs.chunks { files_id: 1, n: 1 } |
| m30002| Thu Aug 09 13:07:19 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| sh3328| connected to: 127.0.0.1:30999 |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:286337 280ms |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:19 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id.1, size: 128MB, took 1.087 secs |
| m30002| Thu Aug 09 13:07:20 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:220058 218ms |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
2012-08-09 09:07:22 EDT | m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:20 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:1858414 1856ms |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
2012-08-09 09:07:23 EDT | m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m29000| Thu Aug 09 13:07:22 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn6] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m29000| Thu Aug 09 13:07:22 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn6] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m29000| Thu Aug 09 13:07:22 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:22 [conn5] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m29000| Thu Aug 09 13:07:22 [conn5] build index config.tags { _id: 1 } |
| m29000| Thu Aug 09 13:07:22 [conn5] build index done. scanned 0 total records. 0.002 secs |
| m29000| Thu Aug 09 13:07:22 [conn5] info: creating collection config.tags on add index |
| m29000| Thu Aug 09 13:07:22 [conn5] build index config.tags { ns: 1, min: 1 } |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m29000| Thu Aug 09 13:07:22 [conn5] build index done. scanned 0 total records. 0.002 secs |
| m30002| Thu Aug 09 13:07:22 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30999| Thu Aug 09 13:07:22 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' acquired, ts : 5023b60af6943830424ba965 |
| m30002| Thu Aug 09 13:07:22 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:23 [conn4] insert sharded_files_id.fs.chunks keyUpdates:0 locks(micros) w:1165102 1170ms |
| m30999| Thu Aug 09 13:07:22 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' unlocked. |
| m30002| Thu Aug 09 13:07:23 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:23 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:23 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:23 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:23 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:23 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:23 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:23 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:23 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:23 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:23 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:23 [conn5] request split points lookup for chunk sharded_files_id.fs.chunks { : MinKey } -->> { : MaxKey } |
| m30002| Thu Aug 09 13:07:23 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
| m30002| Thu Aug 09 13:07:23 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:23 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60725c36453d3dc4594') } |
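[editor's note] The repeated "chunk is larger than 1024 bytes because of key" warnings above are expected in this section: with the shard key on files_id alone, every fs.chunks document of the file carries the same key value, so the chunk owning ObjectId('5023b60725c36453d3dc4594') has no interior split point and simply keeps growing past the threshold. A quick way to see why, as a shell sketch (database and ObjectId copied from the log):

    // All chunk documents for this file share one shard-key value, so the
    // range [{ files_id: X }, { files_id: X }] can never be split further.
    db.getSiblingDB("sharded_files_id").fs.chunks
      .count({ files_id: ObjectId("5023b60725c36453d3dc4594") });
    // Returns 80 here (cf. the filemd5 output below), all with equal files_id.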
| m30002| Thu Aug 09 13:07:24 [conn4] command sharded_files_id.$cmd command: { filemd5: ObjectId('5023b60725c36453d3dc4594'), root: "fs" } ntoreturn:1 keyUpdates:0 numYields: 80 locks(micros) r:94137 reslen:94 312ms |
| m30002| Thu Aug 09 13:07:24 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:24 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:24 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:24 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:24 [conn4] warning: we think data is in ram but system says no |
| sh3328| added file: { _id: ObjectId('5023b60725c36453d3dc4594'), filename: "mongod.exe", chunkSize: 262144, uploadDate: new Date(1344517644256), md5: "4c933cd6fa8b299c8f87c96b06aaf38f", length: 20939264 } |
| m30999| Thu Aug 09 13:07:24 [conn4] end connection 127.0.0.1:50874 (1 connection now open) |
| fileObj: { |
| "_id" : ObjectId("5023b60725c36453d3dc4594"), |
| "filename" : "mongod.exe", |
| "chunkSize" : 262144, |
| "uploadDate" : ISODate("2012-08-09T13:07:24.256Z"), |
| "md5" : "4c933cd6fa8b299c8f87c96b06aaf38f", |
| "length" : 20939264 |
| } |
| sh3328| done! |
| m30002| Thu Aug 09 13:07:24 [conn3] command sharded_files_id.$cmd command: { filemd5: ObjectId('5023b60725c36453d3dc4594') } ntoreturn:1 keyUpdates:0 numYields: 80 locks(micros) r:72684 reslen:94 296ms |
| filemd5 output: { "numChunks" : 80, "md5" : "4c933cd6fa8b299c8f87c96b06aaf38f", "ok" : 1 } |
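[editor's note] The filemd5 command seen above makes the shard recompute the file's MD5 from its fs.chunks documents, which is how the harness confirms the sharded upload is intact. A minimal shell sketch of the same check, reusing the ObjectId and database name from the log (assert.eq is the mongo shell's built-in helper):

    // Recompute the server-side MD5 and compare it with the md5 recorded
    // in fs.files at upload time (values taken from the log above).
    var fdb = db.getSiblingDB("sharded_files_id");
    var res = fdb.runCommand({ filemd5: ObjectId("5023b60725c36453d3dc4594"), root: "fs" });
    // Expected here: { "numChunks" : 80, "md5" : "4c933cd6fa8b299c8f87c96b06aaf38f", "ok" : 1 }
    assert.eq(res.md5,
              fdb.fs.files.findOne({ _id: ObjectId("5023b60725c36453d3dc4594") }).md5);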
| |
| |
| |
| |
| **** sharded collection on files_id,n **** |
| m29000| Thu Aug 09 13:07:24 [conn3] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:24 [conn3] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:24 [conn3] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:24 [conn1] couldn't find database [sharded_files_id_n] in config db |
| m30999| Thu Aug 09 13:07:24 [conn1] put [sharded_files_id_n] on: shard0000:localhost:30000 |
| m30000| Thu Aug 09 13:07:24 [conn5] _DEBUG ReadContext db wasn't open, will try to open sharded_files_id_n.system.namespaces |
| m30000| Thu Aug 09 13:07:24 [conn5] opening db: sharded_files_id_n |
| m30999| Thu Aug 09 13:07:24 [conn1] CMD: shardcollection: { shardcollection: "sharded_files_id_n.fs.chunks", key: { files_id: 1.0, n: 1.0 } } |
| m30000| Thu Aug 09 13:07:24 [FileAllocator] allocating new datafile /data/db/test0/sharded_files_id_n.ns, filling with zeroes... |
| m30999| Thu Aug 09 13:07:24 [conn1] enable sharding on: sharded_files_id_n.fs.chunks with shard key: { files_id: 1.0, n: 1.0 } |
| m30000| Thu Aug 09 13:07:25 [FileAllocator] done allocating datafile /data/db/test0/sharded_files_id_n.ns, size: 16MB, took 0.133 secs |
| m30000| Thu Aug 09 13:07:25 [FileAllocator] allocating new datafile /data/db/test0/sharded_files_id_n.0, filling with zeroes... |
| m30000| Thu Aug 09 13:07:25 [FileAllocator] done allocating datafile /data/db/test0/sharded_files_id_n.0, size: 64MB, took 0.531 secs |
| m30000| Thu Aug 09 13:07:25 [conn5] datafileheader::init initializing /data/db/test0/sharded_files_id_n.0 n:0 |
| m30000| Thu Aug 09 13:07:25 [FileAllocator] allocating new datafile /data/db/test0/sharded_files_id_n.1, filling with zeroes... |
| m30000| Thu Aug 09 13:07:25 [conn5] build index sharded_files_id_n.fs.chunks { _id: 1 } |
| m30000| Thu Aug 09 13:07:25 [conn5] build index done. scanned 0 total records. 0.001 secs |
| m30000| Thu Aug 09 13:07:25 [conn5] info: creating collection sharded_files_id_n.fs.chunks on add index |
| m30000| Thu Aug 09 13:07:25 [conn5] build index sharded_files_id_n.fs.chunks { files_id: 1.0, n: 1.0 } |
| m30000| Thu Aug 09 13:07:25 [conn5] build index done. scanned 0 total records. 0.001 secs |
| m30000| Thu Aug 09 13:07:25 [conn5] insert sharded_files_id_n.system.indexes keyUpdates:0 locks(micros) w:678569 670ms |
| m30999| Thu Aug 09 13:07:24 [conn1] enabling sharding on: sharded_files_id_n |
| m29000| Thu Aug 09 13:07:25 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:25 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:25 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:25 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:25 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:25 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:25 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:25 [conn6] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:25 [conn1] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 3ms sequenceNumber: 3 version: 1|0||5023b60df6943830424ba966 based on: (empty) |
| m29000| Thu Aug 09 13:07:25 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:25 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:25 [conn3] no current chunk manager found for this shard, will initialize |
| m29000| Thu Aug 09 13:07:25 [initandlisten] connection accepted from 127.0.0.1:50879 #8 (8 connections now open) |
| m30999| Thu Aug 09 13:07:25 [conn1] going to create 1 chunk(s) for: sharded_files_id_n.fs.chunks using new epoch 5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:25 [conn1] resetting shard version of sharded_files_id_n.fs.chunks on localhost:30001, version is zero |
| m30999| Thu Aug 09 13:07:25 [conn1] resetting shard version of sharded_files_id_n.fs.chunks on localhost:30002, version is zero |
| m30999| range.universal(): 1 |
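[editor's note] The setup above boils down to two admin commands against mongos: enable sharding on the database, then shard fs.chunks on the compound key { files_id: 1, n: 1 } (both command documents appear verbatim in the log, with the command names in their lowercase forms). A sketch of the equivalent shell calls:

    // Shard the GridFS chunks collection on { files_id, n }, as the test does.
    var admin = db.getSiblingDB("admin");
    admin.runCommand({ enablesharding: "sharded_files_id_n" });
    admin.runCommand({ shardcollection: "sharded_files_id_n.fs.chunks",
                       key: { files_id: 1, n: 1 } });

Unlike the files_id-only key, this compound key gives the splitter interior split points (the chunk sequence number n), which is why the autosplits below succeed.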
| Thu Aug 09 13:07:25 shell: started program mongofiles.exe --port 30999 put mongod.exe --db sharded_files_id_n |
| m30999| Thu Aug 09 13:07:25 [mongosMain] connection accepted from 127.0.0.1:50880 #5 (2 connections now open) |
| m30000| Thu Aug 09 13:07:25 [conn4] build index sharded_files_id_n.fs.files { _id: 1 } |
| m30000| Thu Aug 09 13:07:25 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30000| Thu Aug 09 13:07:25 [conn4] info: creating collection sharded_files_id_n.fs.files on add index |
| m30000| Thu Aug 09 13:07:25 [conn4] build index sharded_files_id_n.fs.files { filename: 1 } |
| m30000| Thu Aug 09 13:07:25 [conn4] build index done. scanned 0 total records. 0.001 secs |
| m30999| Thu Aug 09 13:07:25 [conn5] resetting shard version of sharded_files_id_n.fs.chunks on localhost:30001, version is zero |
| m30999| Thu Aug 09 13:07:25 [conn5] resetting shard version of sharded_files_id_n.fs.chunks on localhost:30002, version is zero |
| m30000| Thu Aug 09 13:07:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey } |
| m30000| Thu Aug 09 13:07:26 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 } |
| m30000| Thu Aug 09 13:07:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey } |
| m30000| Thu Aug 09 13:07:26 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 } |
| m30000| Thu Aug 09 13:07:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey } |
| m30000| Thu Aug 09 13:07:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey } |
| m30000| Thu Aug 09 13:07:26 [conn5] warning: chunk is larger than 1024 bytes because of key { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 } |
| m29000| Thu Aug 09 13:07:26 [initandlisten] connection accepted from 127.0.0.1:50881 #9 (9 connections now open) |
| m30000| Thu Aug 09 13:07:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: MinKey, n: MinKey }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_MinKeyn_MinKey", configdb: "localhost:29000" } |
| m30000| Thu Aug 09 13:07:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30000| Thu Aug 09 13:07:26 [LockPinger] creating distributed lock ping thread for localhost:29000 and process AMAZONA-J7UBCUV:30000:1344517646:25828 (sleeping for 30000ms) |
| m29000| Thu Aug 09 13:07:26 [conn9] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:26 [conn9] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:26 [conn8] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:26 [conn8] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:26 [conn8] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:26 [conn8] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30000:1344517646:25828' acquired, ts : 5023b60e055e5556f68d0f36 |
| m30000| Thu Aug 09 13:07:26 [conn5] splitChunk accepted at version 1|0||5023b60df6943830424ba966 |
| m29000| Thu Aug 09 13:07:26 [conn9] info PageFaultRetryableSection will not yield, already locked upon reaching |
| sh2120| connected to: 127.0.0.1:30999 |
| m29000| Thu Aug 09 13:07:26 [conn8] build index config.changelog { _id: 1 } |
| m29000| Thu Aug 09 13:07:26 [conn8] build index done. scanned 0 total records. 0.001 secs |
| m30000| Thu Aug 09 13:07:26 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:26-0", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50865", time: new Date(1344517646050), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: MinKey, n: MinKey }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: MinKey, n: MinKey }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30000| Thu Aug 09 13:07:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30000:1344517646:25828' unlocked. |
| m29000| Thu Aug 09 13:07:26 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:26 [conn6] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:26 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 2ms sequenceNumber: 4 version: 1|2||5023b60df6943830424ba966 based on: 1|0||5023b60df6943830424ba966 |
| m30000| Thu Aug 09 13:07:26 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 0 } -->> { : MaxKey, : MaxKey } |
| m30000| Thu Aug 09 13:07:26 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 0 } -->> { : MaxKey, : MaxKey } |
| m30000| Thu Aug 09 13:07:26 [conn5] warning: chunk is larger than 524288 bytes because of key { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 } |
| m30000| Thu Aug 09 13:07:26 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_0", configdb: "localhost:29000" } |
| m30000| Thu Aug 09 13:07:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30000| Thu Aug 09 13:07:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30000:1344517646:25828' acquired, ts : 5023b60e055e5556f68d0f38 |
| m30000| Thu Aug 09 13:07:26 [conn5] splitChunk accepted at version 1|2||5023b60df6943830424ba966 |
| m30000| Thu Aug 09 13:07:26 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:26-1", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50865", time: new Date(1344517646097), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30000| Thu Aug 09 13:07:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30000:1344517646:25828' unlocked. |
| m30999| Thu Aug 09 13:07:26 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 1|0||000000000000000000000000 min: { files_id: MinKey, n: MinKey } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 } (splitThreshold 921) size: 786624 |
| m30999| Thu Aug 09 13:07:26 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 5ms sequenceNumber: 5 version: 1|4||5023b60df6943830424ba966 based on: 1|2||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:26 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 1|2||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } (splitThreshold 471859) size: 1048832 (migrate suggested) |
| m30999| Thu Aug 09 13:07:26 [conn5] moving chunk (auto): ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 1|4||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } max: { files_id: MaxKey, n: MaxKey } to: shard0001:localhost:30001 |
| m30000| Thu Aug 09 13:07:26 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_3", configdb: "localhost:29000", secondaryThrottle: false } |
| m30000| Thu Aug 09 13:07:26 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30000| Thu Aug 09 13:07:26 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30000:1344517646:25828' acquired, ts : 5023b60e055e5556f68d0f39 |
| m30000| Thu Aug 09 13:07:26 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:26-2", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50865", time: new Date(1344517646128), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", to: "shard0001" } } |
| m30000| Thu Aug 09 13:07:26 [conn5] moveChunk request accepted at version 1|4||5023b60df6943830424ba966 |
| m30000| Thu Aug 09 13:07:26 [conn5] moveChunk number of documents: 1 |
| m30001| Thu Aug 09 13:07:26 [initandlisten] connection accepted from 127.0.0.1:50882 #6 (6 connections now open) |
| m30001| Thu Aug 09 13:07:26 [conn6] opening db: admin |
| m30000| Thu Aug 09 13:07:26 [initandlisten] connection accepted from 127.0.0.1:50883 #6 (6 connections now open) |
| m30001| Thu Aug 09 13:07:26 [migrateThread] opening db: sharded_files_id_n |
| m30001| Thu Aug 09 13:07:26 [FileAllocator] allocating new datafile /data/db/test1/sharded_files_id_n.ns, filling with zeroes... |
| m30001| Thu Aug 09 13:07:26 [FileAllocator] done allocating datafile /data/db/test1/sharded_files_id_n.ns, size: 16MB, took 0.135 secs |
| m30001| Thu Aug 09 13:07:26 [FileAllocator] allocating new datafile /data/db/test1/sharded_files_id_n.0, filling with zeroes... |
| m30000| Thu Aug 09 13:07:26 [FileAllocator] done allocating datafile /data/db/test0/sharded_files_id_n.1, size: 128MB, took 1.075 secs |
| m30001| Thu Aug 09 13:07:26 [FileAllocator] done allocating datafile /data/db/test1/sharded_files_id_n.0, size: 64MB, took 0.537 secs |
| m30001| Thu Aug 09 13:07:26 [migrateThread] datafileheader::init initializing /data/db/test1/sharded_files_id_n.0 n:0 |
| m30001| Thu Aug 09 13:07:26 [FileAllocator] allocating new datafile /data/db/test1/sharded_files_id_n.1, filling with zeroes... |
| m30001| Thu Aug 09 13:07:26 [migrateThread] build index sharded_files_id_n.fs.chunks { _id: 1 } |
| m30001| Thu Aug 09 13:07:26 [migrateThread] build index done. scanned 0 total records. 0.001 secs |
| m30999| Thu Aug 09 13:07:26 [conn5] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 1|4||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } max: { files_id: MaxKey, n: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001 |
| m30001| Thu Aug 09 13:07:26 [migrateThread] build index sharded_files_id_n.fs.chunks { files_id: 1.0, n: 1.0 } |
| m30001| Thu Aug 09 13:07:26 [migrateThread] build index done. scanned 0 total records. 0.079 secs |
| m30000| Thu Aug 09 13:07:26 [conn6] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:26 [conn6] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:26 [migrateThread] info: creating collection sharded_files_id_n.fs.chunks on add index |
| m30001| Thu Aug 09 13:07:26 [migrateThread] migrate commit flushed to journal for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } -> { files_id: MaxKey, n: MaxKey } |
| m30000| Thu Aug 09 13:07:27 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30000", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30000| Thu Aug 09 13:07:27 [conn5] moveChunk setting version to: 2|0||5023b60df6943830424ba966 |
| m30001| Thu Aug 09 13:07:26 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } -> { files_id: MaxKey, n: MaxKey } |
| m30001| Thu Aug 09 13:07:27 [migrateThread] migrate commit flushed to journal for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } -> { files_id: MaxKey, n: MaxKey } |
| m30001| Thu Aug 09 13:07:27 [migrateThread] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:27-0", server: "AMAZONA-J7UBCUV", clientAddr: ":27017", time: new Date(1344517647158), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 5: 771, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 235 } } |
| m29000| Thu Aug 09 13:07:27 [initandlisten] connection accepted from 127.0.0.1:50886 #10 (10 connections now open) |
| m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } -> { files_id: MaxKey, n: MaxKey } |
| m30000| Thu Aug 09 13:07:27 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30000", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } |
| m30000| Thu Aug 09 13:07:27 [conn5] moveChunk updating self version to: 2|1||5023b60df6943830424ba966 through { files_id: MinKey, n: MinKey } -> { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 } for collection 'sharded_files_id_n.fs.chunks' |
| m29000| Thu Aug 09 13:07:27 [conn9] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn9] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn9] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn9] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn9] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn9] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn9] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:27 [migrateThread] thread migrateThread stack usage was 30792 bytes, which is the most so far |
| m30000| Thu Aug 09 13:07:27 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:27-3", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50865", time: new Date(1344517647173), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0000", to: "shard0001" } } |
| m30000| Thu Aug 09 13:07:27 [conn5] doing delete inline |
| m30000| Thu Aug 09 13:07:27 [conn5] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:27 [conn5] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:27 [conn5] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:27 [conn5] moveChunk deleted: 1 |
| m29000| Thu Aug 09 13:07:27 [conn9] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn9] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:27 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30000:1344517646:25828' unlocked. |
| m30000| Thu Aug 09 13:07:27 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:27-4", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50865", time: new Date(1344517647173), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 6: 1, step2 of 6: 9, step3 of 6: 3, step4 of 6: 999, step5 of 6: 36, step6 of 6: 3 } } |
| m30000| Thu Aug 09 13:07:27 [conn5] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_3", configdb: "localhost:29000", secondaryThrottle: false } ntoreturn:1 keyUpdates:0 locks(micros) r:711 w:3041 reslen:37 1045ms |
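[editor's note] At this point the first migration has committed: the chunk from { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } to MaxKey now lives on shard0001, and the donor has deleted its copy inline. One way to see the resulting layout is to read config.chunks through mongos (a sketch; the namespace is the one from the log):

    // List chunk ranges and owning shards for the sharded namespace.
    db.getSiblingDB("config").chunks
      .find({ ns: "sharded_files_id_n.fs.chunks" })
      .sort({ min: 1 })
      .forEach(printjson);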
| m29000| Thu Aug 09 13:07:27 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn6] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:27 [conn5] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:27 [conn4] no current chunk manager found for this shard, will initialize |
| m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:27 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 3 } -->> { : MaxKey, : MaxKey } |
| m30001| Thu Aug 09 13:07:27 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 3 } -->> { : MaxKey, : MaxKey } |
| m29000| Thu Aug 09 13:07:27 [initandlisten] connection accepted from 127.0.0.1:50887 #11 (11 connections now open) |
| m30001| Thu Aug 09 13:07:27 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_3", configdb: "localhost:29000" } |
| m30001| Thu Aug 09 13:07:27 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:27 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 7ms sequenceNumber: 6 version: 2|1||5023b60df6943830424ba966 based on: 1|4||5023b60df6943830424ba966 |
2012-08-09 09:07:28 EDT | m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:27 [conn10] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:27 [FileAllocator] done allocating datafile /data/db/test1/sharded_files_id_n.1, size: 128MB, took 1.071 secs |
| m29000| Thu Aug 09 13:07:28 [conn10] update config.lockpings query: { _id: "AMAZONA-J7UBCUV:30001:1344517647:26113" } update: { $set: { ping: new Date(1344517647236) } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) w:1132 1326ms |
| m29000| Thu Aug 09 13:07:28 [conn11] update config.locks query: { _id: "sharded_files_id_n.fs.chunks", state: 0, ts: ObjectId('5023b60e055e5556f68d0f39') } update: { $set: { state: 1, who: "AMAZONA-J7UBCUV:30001:1344517647:26113:conn5:10008", process: "AMAZONA-J7UBCUV:30001:1344517647:26113", when: new Date(1344517647236), why: "split-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }", ts: ObjectId('5023b60f5829ae6a2fa0fe18') } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) w:2521 1326ms |
| m30001| Thu Aug 09 13:07:27 [LockPinger] creating distributed lock ping thread for localhost:29000 and process AMAZONA-J7UBCUV:30001:1344517647:26113 (sleeping for 30000ms) |
| m30001| Thu Aug 09 13:07:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30001:1344517647:26113' acquired, ts : 5023b60f5829ae6a2fa0fe18 |
| m30001| Thu Aug 09 13:07:28 [conn5] splitChunk accepted at version 2|0||5023b60df6943830424ba966 |
| m30001| Thu Aug 09 13:07:28 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:28-1", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50866", time: new Date(1344517648577), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30001| Thu Aug 09 13:07:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30001:1344517647:26113' unlocked. |
| m30001| Thu Aug 09 13:07:28 [conn5] command admin.$cmd command: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_3", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 reslen:119 1372ms |
| m29000| Thu Aug 09 13:07:28 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:28 [conn6] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:28 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 15ms sequenceNumber: 7 version: 2|3||5023b60df6943830424ba966 based on: 2|1||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:28 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 2|0||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } (splitThreshold 943718) size: 1048832 (migrate suggested) |
| m30999| Thu Aug 09 13:07:28 [conn5] moving chunk (auto): ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } max: { files_id: MaxKey, n: MaxKey } to: shard0002:localhost:30002 |
| m30001| Thu Aug 09 13:07:28 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_6", configdb: "localhost:29000", secondaryThrottle: false } |
| m30001| Thu Aug 09 13:07:28 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30001| Thu Aug 09 13:07:28 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30001:1344517647:26113' acquired, ts : 5023b6105829ae6a2fa0fe19 |
| m30001| Thu Aug 09 13:07:28 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:28-2", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50866", time: new Date(1344517648624), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", to: "shard0002" } } |
| m30001| Thu Aug 09 13:07:28 [conn5] moveChunk request accepted at version 2|3||5023b60df6943830424ba966 |
| m30001| Thu Aug 09 13:07:28 [conn5] moveChunk number of documents: 1 |
| m30002| Thu Aug 09 13:07:28 [initandlisten] connection accepted from 127.0.0.1:50888 #6 (6 connections now open) |
| m30002| Thu Aug 09 13:07:28 [conn6] opening db: admin |
| m30001| Thu Aug 09 13:07:28 [initandlisten] connection accepted from 127.0.0.1:50889 #7 (7 connections now open) |
| m30002| Thu Aug 09 13:07:28 [migrateThread] opening db: sharded_files_id_n |
| m30002| Thu Aug 09 13:07:28 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id_n.ns, filling with zeroes... |
| m29000| Thu Aug 09 13:07:28 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:28 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:28 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:28 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:28 [conn5] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:28 [initandlisten] connection accepted from 127.0.0.1:50890 #8 (8 connections now open) |
| m30002| Thu Aug 09 13:07:28 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id_n.ns, size: 16MB, took 0.133 secs |
| m30999| Thu Aug 09 13:07:28 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' acquired, ts : 5023b610f6943830424ba967 |
| m30002| Thu Aug 09 13:07:28 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id_n.0, filling with zeroes... |
| m30002| Thu Aug 09 13:07:29 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id_n.0, size: 64MB, took 0.532 secs |
| m30002| Thu Aug 09 13:07:29 [migrateThread] datafileheader::init initializing /data/db/test2/sharded_files_id_n.0 n:0 |
| m30002| Thu Aug 09 13:07:29 [FileAllocator] allocating new datafile /data/db/test2/sharded_files_id_n.1, filling with zeroes... |
| m30002| Thu Aug 09 13:07:29 [migrateThread] build index sharded_files_id_n.fs.chunks { _id: 1 } |
| m30002| Thu Aug 09 13:07:29 [migrateThread] build index done. scanned 0 total records. 0.001 secs |
| m30002| Thu Aug 09 13:07:29 [migrateThread] info: creating collection sharded_files_id_n.fs.chunks on add index |
| m30999| Thu Aug 09 13:07:28 [conn5] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0001:localhost:30001 lastmod: 2|3||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } max: { files_id: MaxKey, n: MaxKey }) shard0001:localhost:30001 -> shard0002:localhost:30002 |
| m30002| Thu Aug 09 13:07:29 [conn5] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:375 reslen:2293 530ms |
| m29000| Thu Aug 09 13:07:29 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:29 [conn6] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:29 [migrateThread] build index sharded_files_id_n.fs.chunks { files_id: 1.0, n: 1.0 } |
| m30999| Thu Aug 09 13:07:29 [Balancer] ns: sharded_files_id_n.fs.chunks going to move { _id: "sharded_files_id_n.fs.chunks-files_id_MinKeyn_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5023b60df6943830424ba966'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: MinKey, n: MinKey }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag [] |
| m30000| Thu Aug 09 13:07:29 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { files_id: MinKey, n: MinKey }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_MinKeyn_MinKey", configdb: "localhost:29000", secondaryThrottle: false } |
| m30000| Thu Aug 09 13:07:29 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m29000| Thu Aug 09 13:07:29 [initandlisten] connection accepted from 127.0.0.1:50892 #12 (12 connections now open) |
| m30000| Thu Aug 09 13:07:29 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:29-5", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50865", time: new Date(1344517649326), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: MinKey, n: MinKey }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 }, step1 of 6: 1, note: "aborted" } } |
| m30999| Thu Aug 09 13:07:29 [Balancer] moveChunk result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30001:1344517647:26113", state: 2, ts: ObjectId('5023b6105829ae6a2fa0fe19'), when: new Date(1344517648624), who: "AMAZONA-J7UBCUV:30001:1344517647:26113:conn5:10008", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection metadata could not be locked with lock migrate-{ files_id: MinKey, n: MinKey }", ok: 0.0 } |
| m30999| Thu Aug 09 13:07:29 [Balancer] balancer move failed: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30001:1344517647:26113", state: 2, ts: ObjectId('5023b6105829ae6a2fa0fe19'), when: new Date(1344517648624), who: "AMAZONA-J7UBCUV:30001:1344517647:26113:conn5:10008", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection metadata could not be locked with lock migrate-{ files_id: MinKey, n: MinKey }", ok: 0.0 } from: shard0000 to: shard0002 chunk: min: { files_id: MinKey, n: MinKey } max: { files_id: MinKey, n: MinKey } |
| m30999| Thu Aug 09 13:07:29 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' unlocked. |
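[editor's note] The balancer's attempt to move the MinKey chunk fails here by design: the in-flight migration from shard0001 to shard0002 still holds the collection's distributed lock, so shard0000's moveChunk aborts with "the collection metadata could not be locked". The lock document can be inspected on the config server (sketch; the field names state, who, and why match the log):

    // state: 2 means the lock is held; "why" names the migration holding it.
    db.getSiblingDB("config").locks
      .find({ _id: "sharded_files_id_n.fs.chunks" })
      .forEach(printjson);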
| m30001| Thu Aug 09 13:07:29 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30002| Thu Aug 09 13:07:30 [migrateThread] build index done. scanned 0 total records. 0.89 secs |
| m30999| Thu Aug 09 13:07:29 [Balancer] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0000:localhost:30000 lastmod: 2|1||000000000000000000000000 min: { files_id: MinKey, n: MinKey } max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 0 }) shard0000:localhost:30000 -> shard0002:localhost:30002 |
| m30001| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:30 [migrateThread] migrate commit flushed to journal for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } -> { files_id: MaxKey, n: MaxKey } |
| m30002| Thu Aug 09 13:07:30 [FileAllocator] done allocating datafile /data/db/test2/sharded_files_id_n.1, size: 128MB, took 1.067 secs |
| m30001| Thu Aug 09 13:07:30 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Thu Aug 09 13:07:30 [conn5] moveChunk setting version to: 3|0||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:30 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } -> { files_id: MaxKey, n: MaxKey } |
| m30002| Thu Aug 09 13:07:30 [migrateThread] migrate commit flushed to journal for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } -> { files_id: MaxKey, n: MaxKey } |
| m30002| Thu Aug 09 13:07:30 [migrateThread] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:30-0", server: "AMAZONA-J7UBCUV", clientAddr: ":27017", time: new Date(1344517650683), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 5: 1574, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 460 } } |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:30 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } -> { files_id: MaxKey, n: MaxKey } |
| m30001| Thu Aug 09 13:07:30 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30001", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, shardKeyPattern: { files_id: 1, n: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 262206, catchup: 0, steady: 0 }, ok: 1.0 } |
| m30001| Thu Aug 09 13:07:30 [conn5] moveChunk updating self version to: 3|1||5023b60df6943830424ba966 through { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 3 } -> { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } for collection 'sharded_files_id_n.fs.chunks' |
| m29000| Thu Aug 09 13:07:30 [conn11] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn11] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn11] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn11] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn11] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn11] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn11] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:30 [migrateThread] thread migrateThread stack usage was 30792 bytes, which is the most so far |
| m30001| Thu Aug 09 13:07:30 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:30-3", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50866", time: new Date(1344517650699), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0001", to: "shard0002" } } |
| m30001| Thu Aug 09 13:07:30 [conn5] doing delete inline |
| m30001| Thu Aug 09 13:07:30 [conn5] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:30 [conn5] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:30 [conn5] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:30 [conn5] moveChunk deleted: 1 |
| m29000| Thu Aug 09 13:07:30 [conn11] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn11] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30001:1344517647:26113' unlocked. |
| m30001| Thu Aug 09 13:07:30 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:30-4", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50866", time: new Date(1344517650699), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, step1 of 6: 1, step2 of 6: 9, step3 of 6: 3, step4 of 6: 2028, step5 of 6: 37, step6 of 6: 3 } } |
| m30001| Thu Aug 09 13:07:30 [conn5] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_6", configdb: "localhost:29000", secondaryThrottle: false } ntoreturn:1 keyUpdates:0 locks(micros) r:754 w:2917 reslen:37 2074ms |
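[editor's note] The second migration mirrors the first, this time shard0001 -> shard0002 for the chunk starting at n: 6 (2074ms end to end, dominated by step4, the data clone). Mongos drives this automatically after a split, but the same move can be requested by hand through the moveChunk admin command; this sketch copies the key from the log and assumes the shard is addressed by name:

    // Move the chunk containing { files_id: ..., n: 6 } to shard0002.
    db.getSiblingDB("admin").runCommand({
        moveChunk: "sharded_files_id_n.fs.chunks",
        find: { files_id: ObjectId("5023b60dbebcdb51f55100d1"), n: 6 },
        to: "shard0002"
    });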
| m30001| Thu Aug 09 13:07:30 [conn5] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:30 [conn4] no current chunk manager found for this shard, will initialize |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:30 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 6 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:30 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 6 } -->> { : MaxKey, : MaxKey } |
| m29000| Thu Aug 09 13:07:30 [initandlisten] connection accepted from 127.0.0.1:50894 #13 (13 connections now open) |
| m30002| Thu Aug 09 13:07:30 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_6", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:30 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:30 [LockPinger] creating distributed lock ping thread for localhost:29000 and process AMAZONA-J7UBCUV:30002:1344517650:29293 (sleeping for 30000ms) |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:30 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 9ms sequenceNumber: 8 version: 3|1||5023b60df6943830424ba966 based on: 2|3||5023b60df6943830424ba966 |
2012-08-09 09:07:32 EDT | m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn7] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b6124ce9af875b837bc6 |
| m30002| Thu Aug 09 13:07:30 [conn5] splitChunk accepted at version 3|0||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:30 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:30-1", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517650808), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:30 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m29000| Thu Aug 09 13:07:30 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:30 [conn5] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:30 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 12ms sequenceNumber: 9 version: 3|3||5023b60df6943830424ba966 based on: 3|1||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:32 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) w:1194 1700ms |
| m30002| Thu Aug 09 13:07:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 9 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 9 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 9 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:32 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 9 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:32 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 12 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_9", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:32 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:32 [conn5] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b6144ce9af875b837bca |
| m30999| Thu Aug 09 13:07:30 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|0||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 } (splitThreshold 943718) size: 1048832 (migrate suggested) |
| m30002| Thu Aug 09 13:07:32 [conn5] splitChunk accepted at version 3|3||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:32 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:32-2", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517652602), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 12 }, lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 12 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30002| Thu Aug 09 13:07:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 12 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 12 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:32 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 12 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:32 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 12 } -->> { : MaxKey, : MaxKey } |
| m30999| Thu Aug 09 13:07:32 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 17ms sequenceNumber: 10 version: 3|5||5023b60df6943830424ba966 based on: 3|3||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:32 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|3||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 12 } (splitThreshold 943718) size: 1048832 (migrate suggested) |
| m30002| Thu Aug 09 13:07:32 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 12 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 15 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_12", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:32 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b6144ce9af875b837bce |
| m30002| Thu Aug 09 13:07:32 [conn5] splitChunk accepted at version 3|5||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:32 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:32-3", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517652727), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 12 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 12 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 15 }, lastmod: Timestamp 3000|6, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 15 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|7, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:32 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:32 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 18ms sequenceNumber: 11 version: 3|7||5023b60df6943830424ba966 based on: 3|5||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:33 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:33 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:33 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:33 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:33 [conn4] warning: we think data is in ram but system says no |
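The repeated "we think data is in ram but system says no" warnings, here and throughout the rest of this section, come from mongod's internal page-residency bookkeeping disagreeing with the operating system; in this test they appear to be harmless noise and have no bearing on the splits or the migration.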
| m30002| Thu Aug 09 13:07:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 15 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 15 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 15 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 15 } -->> { : MaxKey, : MaxKey } |
| m30999| Thu Aug 09 13:07:32 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|5||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 12 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 15 } (splitThreshold 943718) size: 1048832 (migrate suggested) |
| m30002| Thu Aug 09 13:07:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b6154ce9af875b837bd2 |
| m30002| Thu Aug 09 13:07:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 15 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 18 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_15", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:33 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:33-4", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517653398), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 15 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 15 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 18 }, lastmod: Timestamp 3000|8, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 18 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|9, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30002| Thu Aug 09 13:07:33 [conn5] splitChunk accepted at version 3|7||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:33 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 14ms sequenceNumber: 12 version: 3|9||5023b60df6943830424ba966 based on: 3|7||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 18 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 18 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:33 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 18 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:33 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 18 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:33 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 18 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 21 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_18", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:33 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b6154ce9af875b837bd6 |
| m30002| Thu Aug 09 13:07:33 [conn5] splitChunk accepted at version 3|9||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:33 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|7||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 15 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 18 } (splitThreshold 943718) size: 1048832 (migrate suggested) |
2012-08-09 09:07:34 EDT | m30002| Thu Aug 09 13:07:33 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:33-5", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517653522), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 18 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 18 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 21 }, lastmod: Timestamp 3000|10, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 21 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:33 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:33 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 25ms sequenceNumber: 13 version: 3|11||5023b60df6943830424ba966 based on: 3|9||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:34 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) w:1032988 1029ms |
| m30002| Thu Aug 09 13:07:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 21 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 21 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 21 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 21 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 21 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 24 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_21", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b6164ce9af875b837bda |
| m30002| Thu Aug 09 13:07:34 [conn5] splitChunk accepted at version 3|11||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:34 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:34-6", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517654630), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 21 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 21 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 24 }, lastmod: Timestamp 3000|12, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 24 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|13, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30999| Thu Aug 09 13:07:33 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|9||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 18 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 21 } (splitThreshold 943718) size: 1049492 (migrate suggested) |
| m30002| Thu Aug 09 13:07:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:34 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 30ms sequenceNumber: 14 version: 3|13||5023b60df6943830424ba966 based on: 3|11||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 24 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 24 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:34 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 24 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:34 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 24 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:34 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 24 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_24", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:34 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b6164ce9af875b837bde |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn7] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:34 [conn5] splitChunk accepted at version 3|13||5023b60df6943830424ba966 |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn13] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:34 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|11||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 21 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 24 } (splitThreshold 943718) size: 1049384 (migrate suggested) |
| m29000| Thu Aug 09 13:07:34 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn7] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:34 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:34-7", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517654739), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 24 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 24 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, lastmod: Timestamp 3000|14, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|15, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:34 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m29000| Thu Aug 09 13:07:34 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:34 [conn5] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:34 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 14ms sequenceNumber: 15 version: 3|15||5023b60df6943830424ba966 based on: 3|13||5023b60df6943830424ba966 |
| m29000| Thu Aug 09 13:07:35 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:35 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:35 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:35 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:35 [conn6] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:34 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|13||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 24 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 } (splitThreshold 943718) size: 1049312 (migrate suggested) |
| m29000| Thu Aug 09 13:07:35 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:35 [conn5] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:35 [Balancer] ns: sharded_files_id_n.fs.chunks going to move { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_6", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('5023b60df6943830424ba966'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, shard: "shard0002" } from: shard0002 to: shard0001 tag [] |
| m30999| Thu Aug 09 13:07:35 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' acquired, ts : 5023b617f6943830424ba968 |
| m30002| Thu Aug 09 13:07:35 [conn5] received moveChunk request: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_6", configdb: "localhost:29000", secondaryThrottle: false } |
| m30002| Thu Aug 09 13:07:35 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:35 [Balancer] moving chunk ns: sharded_files_id_n.fs.chunks moving ( ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|2||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }) shard0002:localhost:30002 -> shard0001:localhost:30001 |
2012-08-09 09:07:37 EDT | m30002| Thu Aug 09 13:07:35 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b6174ce9af875b837be0 |
| m30002| Thu Aug 09 13:07:35 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:35-8", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517655488), what: "moveChunk.start", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, from: "shard0002", to: "shard0001" } } |
| m30002| Thu Aug 09 13:07:35 [conn5] moveChunk number of documents: 3 |
| m30001| Thu Aug 09 13:07:35 [migrateThread] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:35 [migrateThread] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:35 [migrateThread] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:35 [migrateThread] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:35 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } -> { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 } |
| m30002| Thu Aug 09 13:07:35 [conn5] moveChunk request accepted at version 3|15||5023b60df6943830424ba966 |
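This block traces one complete migration: the balancer in mongos picks the chunk { n: 6 } -> { n: 9 } to move from shard0002 to shard0001, the source shard takes the collection's distributed lock and logs moveChunk.start, and the recipient's migrateThread clones the chunk's 3 documents (786618 bytes per the progress lines) before flushing the commit to secondaries and the journal. A hedged manual equivalent, run through the mongos; the balancer issues effectively the same moveChunk on its own:

    // Move the chunk containing the given shard-key value to shard0001.
    db.adminCommand({
        moveChunk: "sharded_files_id_n.fs.chunks",
        find: { files_id: ObjectId("5023b60dbebcdb51f55100d1"), n: 6 },
        to: "shard0001"
    });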
| m30002| Thu Aug 09 13:07:36 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30002", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, shardKeyPattern: { files_id: 1, n: 1 }, state: "catchup", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30001| Thu Aug 09 13:07:37 [migrateThread] migrate commit flushed to journal for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } -> { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 } |
| m30002| Thu Aug 09 13:07:37 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) w:1439 3151ms |
| m30002| Thu Aug 09 13:07:37 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30002", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, shardKeyPattern: { files_id: 1, n: 1 }, state: "catchup", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30002| Thu Aug 09 13:07:37 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 30 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:37 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m29000| Thu Aug 09 13:07:37 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:37 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:37 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:37 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:37 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:37 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:37 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:37 [initandlisten] connection accepted from 127.0.0.1:50901 #14 (14 connections now open) |
| m30002| Thu Aug 09 13:07:37 [initandlisten] connection accepted from 127.0.0.1:50899 #7 (7 connections now open) |
| m30002| Thu Aug 09 13:07:37 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 31 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:37 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:37 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 30 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
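This warning, repeated below with steadily climbing splitKeys (n: 31, 32, and so on as inserts keep arriving), is the expected interaction between autosplit and the in-flight migration: the moveChunk that started at { n: 6 } holds the collection's metadata lock, so every concurrent splitChunk attempt is rejected with "the collection's metadata lock is taken" until the migration commits. A minimal sketch, under the same mongos assumption, for seeing who holds the lock while these failures accumulate:

    // Inspect the collection's distributed lock; state > 0 means held.
    db.getSiblingDB("config").locks.find(
        { _id: "sharded_files_id_n.fs.chunks", state: { $gt: 0 } }
    ).forEach(printjson);   // "why" should read migrate-{ files_id: ..., n: 6 }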
| m30002| Thu Aug 09 13:07:37 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 32 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30999| Thu Aug 09 13:07:37 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 31 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Thu Aug 09 13:07:37 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 32 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Thu Aug 09 13:07:37 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 33 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Thu Aug 09 13:07:38 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 11ms sequenceNumber: 16 version: 3|15||5023b60df6943830424ba966 based on: 3|15||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: chunk manager reload forced for collection 'sharded_files_id_n.fs.chunks', config version is 3|15||5023b60df6943830424ba966 |
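After a run of failed splits, mongos forces a chunk-manager reload to rule out stale routing metadata; since no split actually succeeded, the version stays 3|15||5023b60df6943830424ba966 ("based on" the very same version). The collection version mongos reports is simply the highest chunk lastmod, which can be confirmed directly through the same mongos:

    // The newest lastmod across the collection's chunks is the collection version.
    db.getSiblingDB("config").chunks
      .find({ ns: "sharded_files_id_n.fs.chunks" })
      .sort({ lastmod: -1 })
      .limit(1)
      .forEach(printjson);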
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 34 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m29000| Thu Aug 09 13:07:38 [conn6] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:37 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:37 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:37 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 33 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:37 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 34 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 35 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 35 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m29000| Thu Aug 09 13:07:38 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:38 [conn6] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 36 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) w:1430 109ms |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 37 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 36 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 37 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 38 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 39 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 38 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Thu Aug 09 13:07:38 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 11ms sequenceNumber: 17 version: 3|15||5023b60df6943830424ba966 based on: 3|15||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: chunk manager reload forced for collection 'sharded_files_id_n.fs.chunks', config version is 3|15||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 39 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 40 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 41 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 40 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 42 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 41 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 43 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 42 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 44 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 43 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30999| Thu Aug 09 13:07:38 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 11ms sequenceNumber: 18 version: 3|15||5023b60df6943830424ba966 based on: 3|15||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: chunk manager reload forced for collection 'sharded_files_id_n.fs.chunks', config version is 3|15||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 45 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 44 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) w:1388 124ms |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 46 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 45 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 47 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 46 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 48 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 47 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 48 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
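The run of identical warnings above is expected contention, not corruption: the moveChunk started earlier on conn5 still holds the collection's distributed lock on the config server (the embedded "who" document shows state: 2 and why: "migrate-{ ... n: 6 }"), so every splitChunk arriving while the migration is in flight is refused with "the collection's metadata lock is taken", and mongos simply retries. One way to confirm this, assuming a shell connected to this cluster (localhost:29000 is the configdb in this run), is to read config.locks directly; this is an editorial sketch, not part of the test:

    // Sketch: inspect the distributed lock that is rejecting the splits.
    var lock = db.getSiblingDB("config").locks.findOne(
        { _id: "sharded_files_id_n.fs.chunks" });
    printjson(lock);          // state: 2 => held; "why" names the migration
    assert.eq(2, lock.state); // matches the "who" document in the warnings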
| m30999| Thu Aug 09 13:07:38 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 16ms sequenceNumber: 19 version: 3|15||5023b60df6943830424ba966 based on: 3|15||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: chunk manager reload forced for collection 'sharded_files_id_n.fs.chunks', config version is 3|15||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:38 [conn5] moveChunk data transfer progress: { active: true, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30002", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, shardKeyPattern: { files_id: 1, n: 1 }, state: "steady", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 |
| m30002| Thu Aug 09 13:07:38 [conn5] moveChunk setting version to: 4|0||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30001| Thu Aug 09 13:07:38 [migrateThread] migrate commit succeeded flushing to secondaries for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } -> { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 } |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30001| Thu Aug 09 13:07:38 [migrateThread] migrate commit flushed to journal for 'sharded_files_id_n.fs.chunks' { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 } -> { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 } |
| m30001| Thu Aug 09 13:07:38 [migrateThread] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:38-5", server: "AMAZONA-J7UBCUV", clientAddr: ":27017", time: new Date(1344517658546), what: "moveChunk.to", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, step1 of 5: 2, step2 of 5: 0, step3 of 5: 16, step4 of 5: 0, step5 of 5: 3026 } } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m29000| Thu Aug 09 13:07:38 [conn10] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:38 [conn10] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30000| Thu Aug 09 13:07:38 [initandlisten] connection accepted from 127.0.0.1:50903 #7 (7 connections now open) |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30001| Thu Aug 09 13:07:38 [initandlisten] connection accepted from 127.0.0.1:50904 #9 (9 connections now open) |
| m30002| Thu Aug 09 13:07:38 [initandlisten] connection accepted from 127.0.0.1:50905 #8 (8 connections now open) |
| m30002| Thu Aug 09 13:07:38 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "sharded_files_id_n.fs.chunks", from: "localhost:30002", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, shardKeyPattern: { files_id: 1, n: 1 }, state: "done", counts: { cloned: 3, clonedBytes: 786618, catchup: 0, steady: 0 }, ok: 1.0 } |
| m30002| Thu Aug 09 13:07:38 [conn5] moveChunk updating self version to: 4|1||5023b60df6943830424ba966 through { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 } -> { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 12 } for collection 'sharded_files_id_n.fs.chunks' |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m29000| Thu Aug 09 13:07:38 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:38 [conn13] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
| m30002| Thu Aug 09 13:07:38 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:38-9", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517658561), what: "moveChunk.commit", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, from: "shard0002", to: "shard0001" } } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:38 [conn5] doing delete inline |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn5] moveChunk deleted: 3 |
| m30002| Thu Aug 09 13:07:38 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:38 [conn5] warning: splitChunk failed - cmd: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } result: { who: { _id: "sharded_files_id_n.fs.chunks", process: "AMAZONA-J7UBCUV:30002:1344517650:29293", state: 2, ts: ObjectId('5023b6174ce9af875b837be0'), when: new Date(1344517655441), who: "AMAZONA-J7UBCUV:30002:1344517650:29293:conn5:28211", why: "migrate-{ files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }" }, errmsg: "the collection's metadata lock is taken", ok: 0.0 } |
2012-08-09 09:07:39 EDT | m30002| Thu Aug 09 13:07:38 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:38-10", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517658561), what: "moveChunk.from", ns: "sharded_files_id_n.fs.chunks", details: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, step1 of 6: 1, step2 of 6: 47, step3 of 6: 1, step4 of 6: 3030, step5 of 6: 36, step6 of 6: 5 } } |
| m30002| Thu Aug 09 13:07:38 [conn5] command admin.$cmd command: { moveChunk: "sharded_files_id_n.fs.chunks", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 6 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 9 }, maxChunkSizeBytes: 1048576, shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_6", configdb: "localhost:29000", secondaryThrottle: false } ntoreturn:1 keyUpdates:0 locks(micros) r:778 w:4300 reslen:37 3120ms |
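The 3120ms command summary above closes out the donor side of the migration threaded through this section: clone (cloned: 3, clonedBytes: 786618), steady state, commit accepted by the TO-shard, the donor's version bumped to 4|1, and the three source documents deleted inline. The balancer built this moveChunk internally; issued by hand through mongos it would look roughly like the sketch below, where find only needs to match a document inside the chunk being moved:

    // Sketch: an equivalent manual migration, run against a mongos.
    db.adminCommand({
        moveChunk: "sharded_files_id_n.fs.chunks",
        find: { files_id: ObjectId("5023b60dbebcdb51f55100d1"), n: 6 },
        to: "shard0001"
    });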
| m30002| Thu Aug 09 13:07:38 [conn7] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m29000| Thu Aug 09 13:07:38 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:38 [conn6] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:38 [conn7] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:38 [conn7] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:38 [conn7] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b61a4ce9af875b837bf6 |
| m30002| Thu Aug 09 13:07:38 [conn7] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 27 } -->> { : MaxKey, : MaxKey } |
| m29000| Thu Aug 09 13:07:38 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:38 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:38 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:38 [initandlisten] connection accepted from 127.0.0.1:50906 #15 (15 connections now open) |
| m30002| Thu Aug 09 13:07:38 [conn7] splitChunk accepted at version 4|1||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:38 [Balancer] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 34ms sequenceNumber: 20 version: 4|1||5023b60df6943830424ba966 based on: 3|15||5023b60df6943830424ba966 |
| m29000| Thu Aug 09 13:07:39 [conn13] command config.$cmd command: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('5023b60df6943830424ba966'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 }, shard: "shard0002" }, o2: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_49", lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('5023b60df6943830424ba966'), ns: "sharded_files_id_n.fs.chunks", min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 }, max: { files_id: MaxKey, n: MaxKey }, shard: "shard0002" }, o2: { _id: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_49" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "sharded_files_id_n.fs.chunks" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 4000|1 } } ] } ntoreturn:1 keyUpdates:0 locks(micros) W:1303933 reslen:72 1310ms |
| m29000| Thu Aug 09 13:07:39 [conn6] update config.locks query: { _id: "balancer", ts: ObjectId('5023b617f6943830424ba968') } update: { $set: { state: 0 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) w:3439 1310ms |
| m30002| Thu Aug 09 13:07:39 [conn7] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:39-11", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50899", time: new Date(1344517659918), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 3000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 }, lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m29000| Thu Aug 09 13:07:39 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:39 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:39 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:39 [conn5] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:39 [conn7] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30002| Thu Aug 09 13:07:39 [conn7] command admin.$cmd command: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_27", configdb: "localhost:29000" } ntoreturn:1 keyUpdates:0 reslen:119 1341ms |
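The 1310ms applyOps on conn13 a few lines up is how the split actually commits: one upsert rewrites the existing config.chunks document as the left half (n: 27 through n: 49) and a second inserts the right half (n: 49 through MaxKey), guarded by a preCondition on the collection's highest lastmod so the batch aborts if the metadata changed underneath it. The resulting layout can be read straight off the config server; an editorial sketch:

    // Sketch: list the chunk ranges recorded for the collection.
    db.getSiblingDB("config").chunks
        .find({ ns: "sharded_files_id_n.fs.chunks" })
        .sort({ min: 1 })
        .forEach(printjson);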
| m30999| Thu Aug 09 13:07:39 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' unlocked. |
| m30999| Thu Aug 09 13:07:39 [WriteBackListener-localhost:30002] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 1347ms sequenceNumber: 21 version: 4|3||5023b60df6943830424ba966 based on: 3|15||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:39 [WriteBackListener-localhost:30002] GLE is { singleShard: "localhost:30002", n: 0, connectionId: 8, err: null, ok: 1.0 } |
| m29000| Thu Aug 09 13:07:39 [conn15] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:39 [conn15] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:39 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 15ms sequenceNumber: 22 version: 4|3||5023b60df6943830424ba966 based on: 4|1||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:39 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 49 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:39 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 49 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:39 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 49 } -->> { : MaxKey, : MaxKey } |
| m30999| Thu Aug 09 13:07:39 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 3|15||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 27 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } (splitThreshold 943718) size: 1049080 (migrate suggested) |
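"(migrate suggested)" marks an autosplit: mongos tracks how many bytes it has routed into each chunk, and once the running estimate crosses splitThreshold (943718 bytes here, i.e. 90% of the 1048576-byte maxChunkSizeBytes this test runs with) it asks the owning shard for split points and commits the split as above. The same cut can be requested manually; a sketch, reusing the split key mongos chose at this step:

    // Sketch: force an equivalent split by hand through mongos.
    db.adminCommand({
        split: "sharded_files_id_n.fs.chunks",
        middle: { files_id: ObjectId("5023b60dbebcdb51f55100d1"), n: 49 }
    });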
| m30002| Thu Aug 09 13:07:39 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 55 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_49", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:40 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b61c4ce9af875b837bfa |
| m30002| Thu Aug 09 13:07:40 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:40-12", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517660028), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 55 }, lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 55 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30002| Thu Aug 09 13:07:40 [conn5] splitChunk accepted at version 4|3||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:40 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 37ms sequenceNumber: 23 version: 4|5||5023b60df6943830424ba966 based on: 4|3||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 55 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 55 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 55 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 55 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 55 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 58 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_55", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:40 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30999| Thu Aug 09 13:07:40 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 4|3||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 49 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 55 } (splitThreshold 943718) size: 1049072 (migrate suggested) |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b61c4ce9af875b837bfe |
| m30002| Thu Aug 09 13:07:40 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:40-13", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517660121), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 55 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 55 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 58 }, lastmod: Timestamp 4000|6, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 58 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|7, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30002| Thu Aug 09 13:07:40 [conn5] splitChunk accepted at version 4|5||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:40 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 15ms sequenceNumber: 24 version: 4|7||5023b60df6943830424ba966 based on: 4|5||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 58 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 58 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 58 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 58 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 58 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 61 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_58", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:40 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b61c4ce9af875b837c02 |
| m30002| Thu Aug 09 13:07:40 [conn5] splitChunk accepted at version 4|7||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:40 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 4|5||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 55 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 58 } (splitThreshold 943718) size: 1049056 (migrate suggested) |
| m30002| Thu Aug 09 13:07:40 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:40-14", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517660199), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 58 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 58 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 61 }, lastmod: Timestamp 4000|8, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 61 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:40 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 44ms sequenceNumber: 25 version: 4|9||5023b60df6943830424ba966 based on: 4|7||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 61 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 61 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 61 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 61 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 61 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 64 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_61", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:40 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b61c4ce9af875b837c06 |
| m30002| Thu Aug 09 13:07:40 [conn5] splitChunk accepted at version 4|9||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:40 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:40-15", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517660309), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 61 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 61 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 64 }, lastmod: Timestamp 4000|10, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 64 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|11, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:40 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 15ms sequenceNumber: 26 version: 4|11||5023b60df6943830424ba966 based on: 4|9||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:40 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 4|7||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 58 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 61 } (splitThreshold 943718) size: 1049040 (migrate suggested) |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 64 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 64 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 64 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 64 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 64 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 67 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_64", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:40 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b61c4ce9af875b837c0a |
| m30002| Thu Aug 09 13:07:40 [conn5] splitChunk accepted at version 4|11||5023b60df6943830424ba966 |
| m29000| Thu Aug 09 13:07:40 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn13] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:40 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:40-16", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517660387), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 64 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 64 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 67 }, lastmod: Timestamp 4000|12, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 67 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m29000| Thu Aug 09 13:07:40 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn7] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn13] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn13] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:40 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 49ms sequenceNumber: 27 version: 4|13||5023b60df6943830424ba966 based on: 4|11||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:40 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 4|9||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 61 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 64 } (splitThreshold 943718) size: 1049028 (migrate suggested) |
| m29000| Thu Aug 09 13:07:40 [conn15] warning: ClientCursor::find(): cursor not found in map -1 (ok after a drop) |
| m29000| Thu Aug 09 13:07:40 [conn15] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn15] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn15] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn15] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn15] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 67 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 67 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 67 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 67 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 67 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 70 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_67", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:40 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b61c4ce9af875b837c0e |
| m30002| Thu Aug 09 13:07:40 [conn5] splitChunk accepted at version 4|13||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:40 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:40-17", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517660496), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 67 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 67 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 70 }, lastmod: Timestamp 4000|14, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 70 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|15, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:40 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 16ms sequenceNumber: 28 version: 4|15||5023b60df6943830424ba966 based on: 4|13||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:40 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 4|11||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 64 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 67 } (splitThreshold 943718) size: 1049020 (migrate suggested) |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 70 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 70 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 70 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 70 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:40 [conn5] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 70 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 73 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_70", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:40 [conn5] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b61c4ce9af875b837c12 |
| m30002| Thu Aug 09 13:07:40 [conn5] splitChunk accepted at version 4|15||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:40 [conn5] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:40-18", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50867", time: new Date(1344517660589), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 70 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 70 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 73 }, lastmod: Timestamp 4000|16, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 73 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|17, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:40 [conn5] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:40 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 17ms sequenceNumber: 29 version: 4|17||5023b60df6943830424ba966 based on: 4|15||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:40 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 4|13||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 67 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 70 } (splitThreshold 943718) size: 1049008 (migrate suggested) |
| m29000| Thu Aug 09 13:07:40 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn4] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn6] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:40 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 4|15||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 70 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 73 } (splitThreshold 943718) size: 1049000 (migrate suggested) |
| m29000| Thu Aug 09 13:07:40 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn5] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn6] warning: we think data is in ram but system says no |
| m29000| Thu Aug 09 13:07:40 [conn6] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:40 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' acquired, ts : 5023b61cf6943830424ba969 |
| m30999| Thu Aug 09 13:07:40 [Balancer] distributed lock 'balancer/AMAZONA-J7UBCUV:30999:1344517630:41' unlocked. |
| m30002| Thu Aug 09 13:07:41 [conn4] insert sharded_files_id_n.fs.chunks keyUpdates:0 locks(micros) w:1417 889ms |
| m30002| Thu Aug 09 13:07:41 [conn2] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 73 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:41 [conn2] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 73 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:41 [conn2] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 73 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:41 [conn2] max number of requested split points reached (2) before the end of chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 73 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:41 [conn2] received splitChunk request: { splitChunk: "sharded_files_id_n.fs.chunks", keyPattern: { files_id: 1.0, n: 1.0 }, min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 73 }, max: { files_id: MaxKey, n: MaxKey }, from: "shard0002", splitKeys: [ { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 76 } ], shardId: "sharded_files_id_n.fs.chunks-files_id_ObjectId('5023b60dbebcdb51f55100d1')n_73", configdb: "localhost:29000" } |
| m30002| Thu Aug 09 13:07:41 [conn2] created new distributed lock for sharded_files_id_n.fs.chunks on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) |
| m30002| Thu Aug 09 13:07:41 [conn2] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' acquired, ts : 5023b61d4ce9af875b837c16 |
| m30002| Thu Aug 09 13:07:41 [conn2] splitChunk accepted at version 4|17||5023b60df6943830424ba966 |
| m30002| Thu Aug 09 13:07:41 [conn2] about to log metadata event: { _id: "AMAZONA-J7UBCUV-2012-08-09T13:07:41-19", server: "AMAZONA-J7UBCUV", clientAddr: "127.0.0.1:50855", time: new Date(1344517661557), what: "split", ns: "sharded_files_id_n.fs.chunks", details: { before: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 73 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 73 }, max: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 76 }, lastmod: Timestamp 4000|18, lastmodEpoch: ObjectId('5023b60df6943830424ba966') }, right: { min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 76 }, max: { files_id: MaxKey, n: MaxKey }, lastmod: Timestamp 4000|19, lastmodEpoch: ObjectId('5023b60df6943830424ba966') } } } |
| m30002| Thu Aug 09 13:07:41 [conn2] distributed lock 'sharded_files_id_n.fs.chunks/AMAZONA-J7UBCUV:30002:1344517650:29293' unlocked. |
| m30999| Thu Aug 09 13:07:40 [Balancer] shard0002 is unavailable |
| m30999| Thu Aug 09 13:07:41 [conn5] ChunkManager: time to load chunks for sharded_files_id_n.fs.chunks: 17ms sequenceNumber: 30 version: 4|19||5023b60df6943830424ba966 based on: 4|17||5023b60df6943830424ba966 |
| m30999| Thu Aug 09 13:07:41 [conn5] autosplitted sharded_files_id_n.fs.chunks shard: ns:sharded_files_id_n.fs.chunks at: shard0002:localhost:30002 lastmod: 4|17||000000000000000000000000 min: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 73 } max: { files_id: MaxKey, n: MaxKey } on: { files_id: ObjectId('5023b60dbebcdb51f55100d1'), n: 76 } (splitThreshold 943718) size: 1048992 (migrate suggested) |
| m30002| Thu Aug 09 13:07:41 [conn2] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 76 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:41 [conn2] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 76 } -->> { : MaxKey, : MaxKey } |
| m30000| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30001| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:41 [conn2] request split points lookup for chunk sharded_files_id_n.fs.chunks { : ObjectId('5023b60dbebcdb51f55100d1'), : 76 } -->> { : MaxKey, : MaxKey } |
| m30002| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:41 [conn4] command sharded_files_id_n.$cmd command: { filemd5: ObjectId('5023b60dbebcdb51f55100d1'), root: "fs", partialOk: true, startAt: 9, md5state: BinData } ntoreturn:1 keyUpdates:0 numYields: 42 locks(micros) r:47021 reslen:197 156ms |
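The filemd5 command above is the sharded half of the GridFS integrity check: with partialOk: true, a startAt chunk number, and a carried md5state, each shard hashes only the contiguous run of fs.chunks documents it owns, and mongos threads the intermediate digest from shard to shard. The user-level form, run through mongos, is just the plain command; an editorial sketch:

    // Sketch: user-level md5 of a GridFS file, run against a mongos.
    var d = db.getSiblingDB("sharded_files_id_n");
    var res = d.runCommand({
        filemd5: ObjectId("5023b60dbebcdb51f55100d1"),
        root: "fs"
    });
    printjson(res); // res.md5 should equal the md5 stored in fs.files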
| m30002| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30000| Thu Aug 09 13:07:41 [conn4] warning: we think data is in ram but system says no |
| m30002| Thu Aug 09 13:07:41 [conn8] warning: we think data is in ram but system says no |
| m30999| Thu Aug 09 13:07:41 [WriteBackListener-localhost:30002] GLE is { singleShard: "localhost:30002", n: 0, connectionId: 8, err: null, ok: 1.0 } |
| m30999| Thu Aug 09 13:07:42 [WriteBackListener-localhost:30002] GLE is { singleShard: "localhost:30002", n: 0, connectionId: 8, err: null, ok: 1.0 } |
| m30999| Thu Aug 09 13:07:42 [WriteBackListener-localhost:30002] GLE is { singleShard: "localhost:30002", n: 0, connectionId: 8, err: null, ok: 1.0 } |
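These GLE lines come from the WriteBackListener: when a shard rejects a write sent with a stale shard version, mongos refreshes its ChunkManager (the forced reloads earlier) and replays the write, then reports the shard's getLastError result; n: 0 with err: null is normal for a replayed insert. The acknowledgement it logs is the same one a client could fetch itself; a sketch:

    // Sketch: fetch the write acknowledgement for the last operation
    // on this connection (what the listener logs as "GLE is ...").
    printjson(db.runCommand({ getLastError: 1 }));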
| sh2120| added file: { _id: ObjectId('5023b60dbebcdb51f55100d1'), filename: "mongod.exe", chunkSize: 262144, uploadDate: new Date(1344517661822), md5: "ee4f03d9e587d723c4c6bb99531b3b30", length: 20939264 } |
| m30999| Thu Aug 09 13:07:42 [conn5] end connection 127.0.0.1:50880 (1 connection now open) |
| fileObj: { |
| "_id" : ObjectId("5023b60dbebcdb51f55100d1"), |
| "filename" : "mongod.exe", |
| "chunkSize" : 262144, |
| "uploadDate" : ISODate("2012-08-09T13:07:41.822Z"), |
| "md5" : "ee4f03d9e587d723c4c6bb99531b3b30", |
| "length" : 20939264 |
| sh2120| done! |
| assert: ["4c933cd6fa8b299c8f87c96b06aaf38f"] != ["ee4f03d9e587d723c4c6bb99531b3b30"] are not equal : undefined |
| Error("Printing Stack Trace")@:0 |
| ()@src/mongo/shell/utils.js:37 |
| ("[\"4c933cd6fa8b299c8f87c96b06aaf38f\"] != [\"ee4f03d9e587d723c4c6bb99531b3b30\"] are not equal : undefined")@src/mongo/shell/utils.js:58 |
| ("4c933cd6fa8b299c8f87c96b06aaf38f","ee4f03d9e587d723c4c6bb99531b3b30")@src/mongo/shell/utils.js:88 |
| testGridFS("sharded_files_id_n")@D:\slave\Windows_64bit_DEBUG\mongo\jstests\sharding\gridfs.js:27 |
| @D:\slave\Windows_64bit_DEBUG\mongo\jstests\sharding\gridfs.js:59 |
| |
| Thu Aug 09 13:07:42 uncaught exception: ["4c933cd6fa8b299c8f87c96b06aaf38f"] != ["ee4f03d9e587d723c4c6bb99531b3b30"] are not equal : undefined |
| failed to load: D:\slave\Windows_64bit_DEBUG\mongo\jstests\sharding\gridfs.js |
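This is the actual failure: the test uploads mongod.exe with mongofiles, then compares the MD5 the server recomputes from fs.chunks (via filemd5) against the md5 recorded in fs.files at upload time; here they disagree (4c93... vs ee4f...), meaning at least one chunk document read back differently than written. A rough reconstruction of the failing check, assuming gridfs.js:27 compares the two digests this way (the real test body may differ):

    // Hypothetical reconstruction of the failing assertion; run in a shell against mongos.
    function checkGridFSMd5(dbName) {
        var d = db.getSiblingDB(dbName);
        var fileObj = d.fs.files.findOne({filename: "mongod.exe"});   // md5 recorded at upload
        var res = d.runCommand({filemd5: fileObj._id, root: "fs"});   // md5 recomputed server-side
        assert.eq([res.md5], [fileObj.md5]);  // throws here: the two digests differ
    }
    checkGridFSMd5("sharded_files_id_n");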
| m29000| Thu Aug 09 13:07:42 [initandlisten] connection accepted from 127.0.0.1:50910 #16 (16 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn16] terminating, shutdown command received |
| m29000| Thu Aug 09 13:07:42 dbexit: shutdown called |
| m29000| Thu Aug 09 13:07:42 [conn16] shutdown: going to close listening sockets... |
| m29000| Thu Aug 09 13:07:42 [conn16] closing listening socket: 452 |
| m29000| Thu Aug 09 13:07:42 [conn16] shutdown: going to flush diaglog... |
| m29000| Thu Aug 09 13:07:42 [conn16] shutdown: going to close sockets... |
| m29000| Thu Aug 09 13:07:42 [conn16] shutdown: waiting for fs preallocator... |
| m29000| Thu Aug 09 13:07:42 [conn16] shutdown: lock for final commit... |
| m29000| Thu Aug 09 13:07:42 [conn16] shutdown: final commit... |
| Thu Aug 09 13:07:42 DBClientCursor::init call() failed |
2012-08-09 09:07:43 EDT | m29000| Thu Aug 09 13:07:42 [conn1] end connection 127.0.0.1:50844 (15 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn4] end connection 127.0.0.1:50850 (15 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn6] end connection 127.0.0.1:50852 (15 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn7] end connection 127.0.0.1:50871 (15 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn7] thread conn7 stack usage was 28232 bytes, which is the most so far |
| m29000| Thu Aug 09 13:07:42 [conn4] thread conn4 stack usage was 28312 bytes, which is the most so far |
| m29000| Thu Aug 09 13:07:42 [conn13] end connection 127.0.0.1:50894 (13 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn6] thread conn6 stack usage was 30640 bytes, which is the most so far |
| m29000| Thu Aug 09 13:07:42 [conn11] end connection 127.0.0.1:50887 (13 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn15] end connection 127.0.0.1:50906 (13 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn13] thread conn13 stack usage was 40520 bytes, which is the most so far |
| m29000| Thu Aug 09 13:07:42 [conn9] end connection 127.0.0.1:50881 (14 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn2] end connection 127.0.0.1:50845 (13 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn8] end connection 127.0.0.1:50879 (15 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn12] end connection 127.0.0.1:50892 (13 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn5] end connection 127.0.0.1:50851 (13 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn3] end connection 127.0.0.1:50849 (13 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn10] end connection 127.0.0.1:50886 (13 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn14] end connection 127.0.0.1:50901 (13 connections now open) |
| m29000| Thu Aug 09 13:07:42 [conn16] shutdown: closing all files... |
| m29000| Thu Aug 09 13:07:42 [conn16] closeAllFiles() finished |
| m29000| Thu Aug 09 13:07:42 [conn16] journalCleanup... |
| m29000| Thu Aug 09 13:07:42 [conn16] removeJournalFiles |
| m29000| Thu Aug 09 13:07:42 [conn16] shutdown: removing fs lock... |
| m29000| Thu Aug 09 13:07:42 [conn9] thread conn9 stack usage was 40136 bytes, which is the most so far |
| m29000| Thu Aug 09 13:07:42 dbexit: really exiting now |
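The m29000 block above is the harness's orderly stop of the config server: a shutdown command arrives on a fresh connection, listening sockets close, the journal is flushed and cleaned up, and the fs lock is removed before the process exits. The same stop can be issued by hand; a sketch (port from the log; the try/catch is needed because the server drops the connection while replying, which is exactly the DBClientCursor::init failure logged above):

    // Sketch: orderly shutdown of the config server, as the test harness does it.
    var c = new Mongo("localhost:29000");
    try {
        c.getDB("admin").runCommand({shutdown: 1});
    } catch (e) {
        // expected: the server closes the socket before a reply can be read
    }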
| m30000| Thu Aug 09 13:07:43 [conn8] thread conn8 stack usage was 11376 bytes, which is the most so far |
| m30000| Thu Aug 09 13:07:43 [conn8] terminating, shutdown command received |
| m30000| Thu Aug 09 13:07:43 [initandlisten] connection accepted from 127.0.0.1:50912 #8 (8 connections now open) |
| m30000| Thu Aug 09 13:07:43 [conn8] shutdown: going to close listening sockets... |
| m30000| Thu Aug 09 13:07:43 [conn8] closing listening socket: 416 |
| m30000| Thu Aug 09 13:07:43 [conn8] closing listening socket: 436 |
| m30000| Thu Aug 09 13:07:43 [conn8] shutdown: going to flush diaglog... |
| m30000| Thu Aug 09 13:07:43 [conn8] shutdown: going to close sockets... |
| m30000| Thu Aug 09 13:07:43 [conn8] shutdown: waiting for fs preallocator... |
| m30000| Thu Aug 09 13:07:43 [conn8] shutdown: lock for final commit... |
| m30000| Thu Aug 09 13:07:43 [conn8] shutdown: final commit... |
| m30000| Thu Aug 09 13:07:43 dbexit: shutdown called |
| m30999| Thu Aug 09 13:07:43 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed |
| m30001| Thu Aug 09 13:07:43 [conn6] end connection 127.0.0.1:50882 (8 connections now open) |
| m30999| Thu Aug 09 13:07:43 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('5023b5fef6943830424ba961') } |
| m30000| Thu Aug 09 13:07:43 [websvr] thread websvr stack usage was 19240 bytes, which is the most so far |
| m30999| Thu Aug 09 13:07:43 [WriteBackListener-localhost:30000] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:29000 |
| m30000| Thu Aug 09 13:07:43 [conn5] end connection 127.0.0.1:50865 (7 connections now open) |
| m30000| Thu Aug 09 13:07:43 [conn1] end connection 127.0.0.1:50841 (7 connections now open) |
| m30000| Thu Aug 09 13:07:43 [conn3] end connection 127.0.0.1:50856 (7 connections now open) |
| m30999| Thu Aug 09 13:07:43 [WriteBackListener-localhost:30000] dev: lastError==0 won't report: DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('5023b5fef6943830424ba961') } |
| Thu Aug 09 13:07:43 DBClientCursor::init call() failed |
2012-08-09 09:07:45 EDT | m30000| Thu Aug 09 13:07:43 [conn5] thread conn5 stack usage was 56760 bytes, which is the most so far |
| m30000| Thu Aug 09 13:07:43 [conn4] end connection 127.0.0.1:50860 (7 connections now open) |
| m30000| Thu Aug 09 13:07:43 [conn6] end connection 127.0.0.1:50883 (5 connections now open) |
| m30000| Thu Aug 09 13:07:43 [conn7] end connection 127.0.0.1:50903 (4 connections now open) |
| m30000| Thu Aug 09 13:07:44 [conn8] shutdown: closing all files... |
| m30000| Thu Aug 09 13:07:44 [conn8] closeAllFiles() finished |
| m30000| Thu Aug 09 13:07:44 [conn8] journalCleanup... |
| m30000| Thu Aug 09 13:07:44 [conn8] removeJournalFiles |
| m30000| Thu Aug 09 13:07:44 [conn8] shutdown: removing fs lock... |
| m30000| Thu Aug 09 13:07:44 dbexit: really exiting now |
| m30001| Thu Aug 09 13:07:45 [initandlisten] connection accepted from 127.0.0.1:50913 #10 (9 connections now open) |
| m30001| Thu Aug 09 13:07:45 [conn10] terminating, shutdown command received |
| m30001| Thu Aug 09 13:07:45 dbexit: shutdown called |
| m30001| Thu Aug 09 13:07:45 [conn10] shutdown: going to close listening sockets... |
| m30001| Thu Aug 09 13:07:45 [conn10] closing listening socket: 428 |
| m30001| Thu Aug 09 13:07:45 [conn10] closing listening socket: 440 |
| m30001| Thu Aug 09 13:07:45 [conn10] shutdown: going to flush diaglog... |
| m30001| Thu Aug 09 13:07:45 [conn10] shutdown: going to close sockets... |
| m30001| Thu Aug 09 13:07:45 [conn10] shutdown: waiting for fs preallocator... |
| m30001| Thu Aug 09 13:07:45 [conn10] shutdown: lock for final commit... |
| m30999| Thu Aug 09 13:07:43 [WriteBackListener-localhost:30000] ERROR: backgroundjob WriteBackListener-localhost:30000 error: socket exception [SEND_ERROR] for 127.0.0.1:29000 |
| m30999| Thu Aug 09 13:07:45 [WriteBackListener-localhost:30001] DBClientCursor::init call() failed |
| m30999| Thu Aug 09 13:07:45 [WriteBackListener-localhost:30001] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('5023b5fef6943830424ba961') } |
| m30999| Thu Aug 09 13:07:45 [WriteBackListener-localhost:30001] dev: lastError==0 won't report: DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('5023b5fef6943830424ba961') } |
| Thu Aug 09 13:07:45 DBClientCursor::init call() failed |
| m30999| Thu Aug 09 13:07:45 [WriteBackListener-localhost:30001] ERROR: backgroundjob WriteBackListener-localhost:30001 error: socket exception [SEND_ERROR] for 127.0.0.1:29000 |
| m30001| Thu Aug 09 13:07:45 [conn5] end connection 127.0.0.1:50866 (8 connections now open) |
| m30001| Thu Aug 09 13:07:45 [conn10] shutdown: final commit... |
| m30002| Thu Aug 09 13:07:45 [conn6] end connection 127.0.0.1:50888 (7 connections now open) |
| m30001| Thu Aug 09 13:07:45 [conn5] thread conn5 stack usage was 56760 bytes, which is the most so far |
| m30001| Thu Aug 09 13:07:45 [conn3] end connection 127.0.0.1:50857 (8 connections now open) |
| m30001| Thu Aug 09 13:07:45 [conn7] end connection 127.0.0.1:50889 (8 connections now open) |
| m30001| Thu Aug 09 13:07:45 [conn1] end connection 127.0.0.1:50842 (8 connections now open) |
| m30001| Thu Aug 09 13:07:45 [conn9] end connection 127.0.0.1:50904 (8 connections now open) |
| m30001| Thu Aug 09 13:07:45 [conn8] end connection 127.0.0.1:50890 (7 connections now open) |
| m30001| Thu Aug 09 13:07:45 [conn4] end connection 127.0.0.1:50861 (4 connections now open) |
| m30001| Thu Aug 09 13:07:45 [conn10] shutdown: closing all files... |
| m30001| Thu Aug 09 13:07:45 [conn10] closeAllFiles() finished |
| m30001| Thu Aug 09 13:07:45 [conn10] journalCleanup... |
| m30001| Thu Aug 09 13:07:45 [conn10] removeJournalFiles |
| m30001| Thu Aug 09 13:07:45 [conn10] shutdown: removing fs lock... |
| m30001| Thu Aug 09 13:07:45 dbexit: really exiting now |
| m30002| Thu Aug 09 13:07:46 [initandlisten] connection accepted from 127.0.0.1:50916 #9 (8 connections now open) |
| m30002| Thu Aug 09 13:07:46 [conn9] terminating, shutdown command received |
| m30002| Thu Aug 09 13:07:46 dbexit: shutdown called |
| m30002| Thu Aug 09 13:07:46 [conn9] shutdown: going to close listening sockets... |
| m30002| Thu Aug 09 13:07:46 [conn9] closing listening socket: 436 |
| m30002| Thu Aug 09 13:07:46 [conn9] closing listening socket: 452 |
| m30002| Thu Aug 09 13:07:46 [conn9] shutdown: going to flush diaglog... |
| m30002| Thu Aug 09 13:07:46 [conn9] shutdown: going to close sockets... |
| m30002| Thu Aug 09 13:07:46 [conn9] shutdown: waiting for fs preallocator... |
| m30002| Thu Aug 09 13:07:46 [conn9] shutdown: lock for final commit... |
| m30002| Thu Aug 09 13:07:46 [conn9] shutdown: final commit... |
| m30999| Thu Aug 09 13:07:45 [WriteBackListener-localhost:30001] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:29000 |
| Thu Aug 09 13:07:46 DBClientCursor::init call() failed |
2012-08-09 09:07:48 EDT | m30002| Thu Aug 09 13:07:46 [conn4] end connection 127.0.0.1:50862 (7 connections now open) |
| m30002| Thu Aug 09 13:07:46 [conn3] end connection 127.0.0.1:50858 (7 connections now open) |
| m30002| Thu Aug 09 13:07:46 [conn1] end connection 127.0.0.1:50843 (7 connections now open) |
| m30002| Thu Aug 09 13:07:46 [conn4] thread conn4 stack usage was 31240 bytes, which is the most so far |
| m30999| Thu Aug 09 13:07:46 [WriteBackListener-localhost:30002] DBClientCursor::init call() failed |
| m30999| Thu Aug 09 13:07:46 [WriteBackListener-localhost:30002] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30002 ns: admin.$cmd query: { writebacklisten: ObjectId('5023b5fef6943830424ba961') } |
| m30999| Thu Aug 09 13:07:46 [WriteBackListener-localhost:30002] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:29000 |
| m30999| Thu Aug 09 13:07:46 [WriteBackListener-localhost:30002] ERROR: backgroundjob WriteBackListener-localhost:30002 error: socket exception [SEND_ERROR] for 127.0.0.1:29000 |
| m30002| Thu Aug 09 13:07:46 [conn5] end connection 127.0.0.1:50867 (6 connections now open) |
| m30002| Thu Aug 09 13:07:46 [conn2] end connection 127.0.0.1:50855 (6 connections now open) |
| m30002| Thu Aug 09 13:07:46 [conn5] thread conn5 stack usage was 56760 bytes, which is the most so far |
| m30002| Thu Aug 09 13:07:46 [conn8] end connection 127.0.0.1:50905 (4 connections now open) |
| m30002| Thu Aug 09 13:07:47 [conn9] shutdown: closing all files... |
| m30002| Thu Aug 09 13:07:47 [conn9] closeAllFiles() finished |
| m30002| Thu Aug 09 13:07:47 [conn9] journalCleanup... |
| m30002| Thu Aug 09 13:07:47 [conn9] removeJournalFiles |
| m30002| Thu Aug 09 13:07:47 [conn9] shutdown: removing fs lock... |
| m30002| Thu Aug 09 13:07:47 dbexit: really exiting now |
| m30999| Thu Aug 09 13:07:46 [WriteBackListener-localhost:30002] dev: lastError==0 won't report: DBClientBase::findN: transport error: localhost:30002 ns: admin.$cmd query: { writebacklisten: ObjectId('5023b5fef6943830424ba961') } |
| m30999| Thu Aug 09 13:07:47 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:29000 |
| m30999| Thu Aug 09 13:07:48 [conn6] terminating, shutdown command received |
| m30999| Thu Aug 09 13:07:48 [conn6] dbexit: shutdown called rc:0 shutdown called |
| m30999| Thu Aug 09 13:07:48 [mongosMain] connection accepted from 127.0.0.1:50918 #6 (2 connections now open) |
| Thu Aug 09 13:07:48 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:30999 |
| Thu Aug 09 13:07:48 DBClientCursor::init call() failed |
| Thu Aug 09 13:07:48 SocketException: remote: 127.0.0.1:30999 error: 9001 socket exception [1] server [127.0.0.1:30999] |