Realm has reported increased memory usage after upgrading to v1.11.0 of the driver. At first glance, this seems to be related to the new memory pooling for operation execution introduced in GODRIVER-2021. In particular, the connection.read method accounts for a significant share of the retained memory. qingyang.hu@mongodb.com theorizes that these memory spikes are due to our lack of a pool "cap": we never release pool resources, so memory usage remains high after a spike of large messages returned on the wire. We should investigate whether adding a cap reduces memory usage and, if so, what to set the cap to.
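If the theory holds, a fix would resemble a size cap applied when buffers are returned to the pool, so that spike-sized buffers are released to the GC instead of being retained indefinitely. A minimal sketch of the idea (the pool, the helper names, and the 16MiB threshold are all hypothetical, not the driver's actual pooling code):

package main

import "sync"

// maxPooledSize is a hypothetical cap; the investigation would need to pick
// the real value (e.g. from the distribution of observed message sizes).
const maxPooledSize = 16 * 1024 * 1024

var bufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 0, 256)
		return &b
	},
}

func getBuffer() []byte {
	return (*bufPool.Get().(*[]byte))[:0]
}

func putBuffer(b []byte) {
	// This is the "cap": without it the pool retains the largest buffer it
	// has ever seen, so memory stays high after a spike of large messages.
	if cap(b) > maxPooledSize {
		return // drop oversized buffers and let the GC reclaim them
	}
	bufPool.Put(&b)
}

func main() {
	b := getBuffer()
	b = append(b, make([]byte, 32<<20)...) // simulate a 32MiB wire message
	putBuffer(b)                           // exceeds the cap, so it is dropped
}

Picking the threshold is part of the open question: too small and every large message allocates a fresh buffer anyway, too large and the pool still pins spike memory.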
Here are the provided memory profiles:
(pprof) top20
Showing nodes accounting for 21630.18MB, 95.17% of 22728.03MB total
Dropped 1009 nodes (cum <= 113.64MB)
Showing top 20 nodes out of 161
      flat  flat%   sum%        cum   cum%
20104.58MB 88.46% 88.46% 20111.34MB 88.49%  go.mongodb.org/mongo-driver/x/mongo/driver/topology.(*connection).read
  315.07MB  1.39% 89.84%   315.07MB  1.39%  reflect.New
  199.07MB  0.88% 90.72%   199.07MB  0.88%  github.com/dop251/goja.(*baseObject)._put
  165.53MB  0.73% 91.45%   166.53MB  0.73%  go.mongodb.org/mongo-driver/x/mongo/driver.Operation.createMsgWireMessage
  162.28MB  0.71% 92.16%   237.28MB  1.04%  github.com/10gen/baas/utils/xjson.decodeExtendedBsonD
  148.75MB  0.65% 92.82%   427.32MB  1.88%  github.com/10gen/baas/app/metadata.NewStoreWithOptions
  134.93MB  0.59% 93.41%   134.93MB  0.59%  go.mongodb.org/mongo-driver/bson/bsonrw.(*valueReader).readString
  118.62MB  0.52% 93.93%   118.62MB  0.52%  github.com/10gen/baas/function/execution/vm.NewEventLoop
   99.05MB  0.44% 94.37%   119.05MB  0.52%  github.com/xeipuuv/gojsonschema.NewSubSchema (inline)
   93.01MB  0.41% 94.78%   235.04MB  1.03%  github.com/dop251/goja.(*Runtime).newNativeFunc
      76MB  0.33% 95.11%   238.28MB  1.05%  github.com/10gen/baas/utils/xjson.DecodeExtended
    3.77MB 0.017% 95.13% 20117.63MB 88.51%  go.mongodb.org/mongo-driver/x/mongo/driver.Operation.roundTrip
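Nearly all of the retained memory (88.46% flat) is attributed to (*connection).read; the listing below shows the allocation site.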
ROUTINE ======================== go.mongodb.org/mongo-driver/x/mongo/driver/topology.(*connection).read in pkg/mod/github.com/mongodb-forks/mongo-go-driver@v1.11.0-baas/x/mongo/driver/topology/connection.go
   19.63GB    19.64GB (flat, cum) 88.49% of Total
         .          .    455:		return dst, errResponseTooLarge.Error(), errResponseTooLarge
         .          .    456:	}
         .          .    457:
         .          .    458:	if int(size) > cap(dst) {
         .          .    459:		// Since we can't grow this slice without allocating, just allocate an entirely new slice.
   19.63GB    19.63GB    460:		dst = make([]byte, 0, size)
         .          .    461:	}
         .          .    462:	// We need to ensure we don't accidentally read into a subsequent wire message, so we set the
         .          .    463:	// size to read exactly this wire message.
         .          .    464:	dst = dst[:size]
         .          .    465:	copy(dst, sizeBuf[:])
         .          .    466:
         .     6.76MB    467:	_, err = io.ReadFull(c.nc, dst[4:])
         .          .    468:	if err != nil {
         .          .    469:		return dst, "incomplete read of full message", err
         .          .    470:	}
         .          .    471:
         .          .    472:	return dst, "", nil
ROUTINE ======================== go.mongodb.org/mongo-driver/x/mongo/driver/topology.(*connection).readWireMessage in pkg/mod/github.com/mongodb-forks/mongo-go-driver@v1.11.0-baas/x/mongo/driver/topology/connection.go
         0    19.64GB (flat, cum) 88.49% of Total
         .          .    397:
         .          .    398:	if err := c.nc.SetReadDeadline(deadline); err != nil {
         .          .    399:		return nil, ConnectionError{ConnectionID: c.id, Wrapped: err, message: "failed to set read deadline"}
         .          .    400:	}
         .          .    401:
         .    19.64GB    402:	dst, errMsg, err := c.read(ctx, dst)
         .          .    403:	if err != nil {
         .          .    404:		// We closeConnection the connection because we don't know if there are other bytes left to read.
         .          .    405:		c.close()
         .          .    406:		message := errMsg
         .          .    407:		if err == io.EOF {
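The listing pinpoints the mechanism: whenever an incoming wire message is larger than the capacity of the (pooled) dst buffer, connection.go:460 allocates an entirely new slice at the full message size, and that larger buffer is what ultimately goes back to the pool. A standalone toy sketch of the resulting ratcheting behavior (not the driver's actual read path):

package main

import "fmt"

// read mirrors the allocation pattern at connection.go:460: if the incoming
// message is larger than the buffer's capacity, allocate an entirely new
// slice at the full message size.
func read(dst []byte, size int) []byte {
	if size > cap(dst) {
		dst = make([]byte, 0, size)
	}
	return dst[:size]
}

func main() {
	var buf []byte
	for _, msgSize := range []int{512, 48 << 20, 1024, 256} {
		buf = read(buf, msgSize)
		fmt.Printf("message %8d bytes -> buffer cap %d\n", msgSize, cap(buf))
	}
	// Capacity stays at 48MiB after the one large message; if buf is then
	// returned to a pool, that memory is never released.
}

After the single 48MiB message the buffer's capacity never shrinks; buffers like this, held in a pool across many connections, would explain the ~19.6GB retained at this call site.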
Issue links:
- is caused by GODRIVER-1670: Use buffer pooling and copying return values to reduce memory allocation (Closed)