Core Server / SERVER-28360

Incident where mongod shut down via the signalProcessingThread when it received a SIGINT

    • Type: Question
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: 3.2.12
    • Component/s: None

      Hi,

      We are running a 3.2.12 WiredTiger MongoDB replica set that was started with fork:true.

      The Primary of this replica set was shut down via the signalProcessingThread when it received a SIGINT.

      We do not believe this was initiated via an external source.
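
      To make sure we are describing the mechanism correctly: our understanding is that mongod handles signals on a dedicated thread. Below is a minimal sketch of that general POSIX pattern that we put together for reference; it is only an illustration, not MongoDB's actual implementation, and all names in it are our own.

      // Minimal sketch of a dedicated signal-processing thread (general POSIX pattern).
      // Illustrative only; this is NOT MongoDB's implementation.
      #include <csignal>
      #include <cstdio>
      #include <pthread.h>

      static void* signalProcessingThread(void* arg) {
          sigset_t* set = static_cast<sigset_t*>(arg);
          int sig = 0;
          // Wait synchronously for one of the blocked signals to be delivered to the process.
          sigwait(set, &sig);
          std::printf("got signal %d, initiating clean shutdown\n", sig);
          // ... flush the storage engine, close listeners, exit ...
          return nullptr;
      }

      int main() {
          static sigset_t set;
          sigemptyset(&set);
          sigaddset(&set, SIGINT);
          sigaddset(&set, SIGTERM);
          // Block these signals in the main thread; new threads inherit the mask,
          // so only the dedicated thread below ever receives them.
          pthread_sigmask(SIG_BLOCK, &set, nullptr);

          pthread_t tid;
          pthread_create(&tid, nullptr, signalProcessingThread, &set);

          // ... normal server work would run here ...
          pthread_join(tid, nullptr);
          return 0;
      }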

      Is there something within MongoDB 3.2 WiredTiger that could have caused this to happen? We found a couple of recent examples of other users reporting similar symptoms. The relevant excerpt from our own log is below:

      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten] MongoDB starting : pid=11737 port=12345 dbpath=/my/dbpath 64-bit host=mymachine.acme.com
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten] db version v3.2.12
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten] git version: ef3e1bc78e997f0d9f22f45aeb1d8e3b6ac14a14
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten] allocator: tcmalloc
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten] modules: none
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten] build environment:
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten]     distmod: ubuntu1404
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten]     distarch: x86_64
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten]     target_arch: x86_64
      2017-03-11T06:24:54.217-0800 I CONTROL  [initandlisten] options: { net: { http: { enabled: false }, maxIncomingConnections: 20000, port: 12345, ssl: { CAFile: "REDACTED", PEMKeyFile: "REDACTED", PEMKeyPassword: "<password>", mode: "preferSSL", weakCertificateValidation: true } }, processManagement: { fork: true, pidFilePath: "REDACTED" }, replication: { replSet: "myreplset" }, security: { authorization: "enabled", keyFile: "REDACTED" }, storage: { dbPath: "/my/dbpath", directoryPerDB: true, engine: "wiredTiger" }, systemLog: { destination: "file", logAppend: true, path: "/my/logs/mongodb.log" } }
      2017-03-11T06:24:54.218-0800 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=8G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
      2017-03-11T06:24:54.879-0800 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
      2017-03-11T06:24:54.879-0800 I STORAGE  [initandlisten] The size storer reports that the oplog contains 4770444 records totaling to 1010898408 bytes
      2017-03-11T06:24:54.882-0800 I STORAGE  [initandlisten] Sampling from the oplog between Mar 10 19:31:26:15 and Mar 11 06:24:43:4e to determine where to place markers for truncation
      2017-03-11T06:24:54.882-0800 I STORAGE  [initandlisten] Taking 30 samples and assuming that each section of oplog contains approximately 1569182 records totaling to 332523258 bytes
      
      ...
      
      2017-03-11T14:27:27.068-0800 I CONTROL  [signalProcessingThread] got signal 2 (Interrupt), will terminate after current cmd ends
      2017-03-11T14:27:27.071-0800 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
      
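      Separately, to help rule an external `kill -2 <pid>` in or out, we have been thinking about running a tiny standalone test program on the host that records who sends it a SIGINT. The sketch below is purely illustrative (mongod itself does not log the sender, as far as we know, and the names here are our own):

      // Illustrative standalone test program: print the pid/uid of whatever sends us SIGINT.
      // Not related to mongod's code; just a way to observe signal senders on this host.
      #include <csignal>
      #include <cstdio>
      #include <unistd.h>

      static void onSignal(int sig, siginfo_t* info, void*) {
          // si_pid/si_uid identify the sender when the signal comes from kill(2);
          // terminal-generated signals (e.g. Ctrl-C) typically report pid 0.
          // printf is not async-signal-safe, but is fine for a throwaway test like this.
          std::printf("signal %d from pid %ld uid %ld\n",
                      sig, static_cast<long>(info->si_pid), static_cast<long>(info->si_uid));
      }

      int main() {
          struct sigaction sa = {};
          sa.sa_sigaction = onSignal;
          sa.sa_flags = SA_SIGINFO;
          sigemptyset(&sa.sa_mask);
          sigaction(SIGINT, &sa, nullptr);

          std::printf("pid %ld waiting for SIGINT...\n", static_cast<long>(getpid()));
          pause();  // block until a signal arrives
          return 0;
      }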

      Thanks in advance for your help. Please let us know if you could use any more information.

      Best regards,
      Angela

            Assignee:
            schwerin@mongodb.com Andy Schwerin
            Reporter:
            akung0324 Angela Shulman
            Votes:
            0
            Watchers:
            7
