GODRIVER-1468: Error connecting to Atlas via Go driver when using 2 pods

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: 1.2.1
    • Component/s: Connections
    • Environment: Go 1.13

       
      Description in the customer's words:

      Go version: 1.13
      mongo go-driver: 1.2.1

      • Our Go application is running on AWS EKS. It connects to an Atlas instance.
      • As long as the ReplicaSet for the deployment is set to 1, everything works as expected.
      • As soon as I change the ReplicaSet to 2 for my application deployment, the first instance (pod) of the application comes up successfully, but the second pod fails.
      • It fails when executing the following line (a minimal reproduction sketch is included after the description below):
            mgo, err = mongo.NewClient(options.Client().ApplyURI(uri))
      • The value of uri is mongodb+srv://lockeradmin:XXXX@lockercluster-urwgb.mongodb.net
        (I have masked the password.)
         
      • The error I see is:
        error parsing uri: lookup _mongodb._tcp.lockercluster-urwgb.mongodb.net on 172.20.0.10:53: read udp 10.53.8.160:49134->172.20.0.10:53: i/o timeout
         
        As part of debugging this, we launched a bare busybox image on the K8s cluster, loaded it with the mongo shell, and tried to connect. It connected to the above instance successfully. (A standalone SRV lookup check along the same lines is sketched below.)
        Strangely, I have a SECOND service running on the same EKS cluster and connecting to the same Atlas DB, and that app exhibits the same behavior.
        However, I am able to run one instance of each app successfully.
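
      For reference, here is a minimal sketch of the call path described above, assuming a standard Go driver setup; the URI is a placeholder and the timeout is an arbitrary value, not taken from the customer's deployment.

          package main

          import (
              "context"
              "log"
              "time"

              "go.mongodb.org/mongo-driver/mongo"
              "go.mongodb.org/mongo-driver/mongo/options"
          )

          func main() {
              // Placeholder credentials; the real deployment uses a mongodb+srv://
              // Atlas URI of the form mongodb+srv://user:password@lockercluster-urwgb.mongodb.net.
              uri := "mongodb+srv://user:password@lockercluster-urwgb.mongodb.net"

              // For mongodb+srv URIs, parsing the URI triggers the SRV/TXT DNS
              // lookups that are failing with the i/o timeout reported above.
              client, err := mongo.NewClient(options.Client().ApplyURI(uri))
              if err != nil {
                  log.Fatalf("error creating client: %v", err)
              }

              ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
              defer cancel()
              if err := client.Connect(ctx); err != nil {
                  log.Fatalf("error connecting: %v", err)
              }
              defer client.Disconnect(ctx)
          }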
         
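      Since the failure is a DNS SRV lookup timeout, a small standard-library sketch like the one below (an assumed debugging step, not part of the original report; the host is taken from the reported URI) can confirm from inside the failing pod whether the _mongodb._tcp SRV record resolves independently of the driver.

          package main

          import (
              "fmt"
              "log"
              "net"
          )

          func main() {
              // Cluster host from the reported URI; replace as needed.
              host := "lockercluster-urwgb.mongodb.net"

              // Reproduces the SRV lookup the driver performs for mongodb+srv URIs,
              // i.e. _mongodb._tcp.<host>, without involving the driver.
              cname, addrs, err := net.LookupSRV("mongodb", "tcp", host)
              if err != nil {
                  log.Fatalf("SRV lookup failed: %v", err)
              }
              fmt.Println("canonical name:", cname)
              for _, addr := range addrs {
                  fmt.Printf("%s:%d\n", addr.Target, addr.Port)
              }
          }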

            Assignee:
            Unassigned
            Reporter:
            Dhananjay Ghevde (dhananjay.ghevde@mongodb.com)
            Votes:
            0
            Watchers:
            2
