Type: Bug
Resolution: Done
Priority: Major - P3
None
Affects Version/s: None
Component/s: None
Just saw a number of SyncClientResetIntegrationTests fail, all with a timeout while initializing initial data, in this CI build:
io.realm.kotlin.mongodb.exceptions.DownloadingRealmTimeOutException: Realm did not manage to download all initial data in time: /data/user/0/io.realm.sync.testapp/files/mongodb-realm/test-app-flex-ohwet/642d4a0e05b2ec8792c5977f/default.realm, timeout: 1m.
    at io.realm.kotlin.mongodb.internal.SyncConfigurationImpl.initializeRealmData(SyncConfigurationImpl.kt:138)
    at io.realm.kotlin.mongodb.internal.SyncConfigurationImpl$initializeRealmData$1.invokeSuspend(Unknown Source:16)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.internal.ScopeCoroutine.afterResume(Scopes.kt:33)
    at kotlinx.coroutines.AbstractCoroutine.resumeWith(AbstractCoroutine.kt:102)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:104)
    at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284)
    at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
    at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
    at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source:1)
    at io.realm.kotlin.internal.platform.CoroutineUtilsSharedJvmKt.runBlocking(CoroutineUtilsSharedJvm.kt:22)
    at io.realm.kotlin.internal.platform.CoroutineUtilsSharedJvmKt.runBlocking$default(CoroutineUtilsSharedJvm.kt:21)
    at io.realm.kotlin.internal.RealmImpl.<init>(RealmImpl.kt:114)
    at io.realm.kotlin.internal.RealmImpl.<init>(Unknown Source:0)
    at io.realm.kotlin.internal.RealmImpl$Companion.create$io_realm_kotlin_library(RealmImpl.kt:265)
    at io.realm.kotlin.Realm$Companion.open(Realm.kt:81)
    at io.realm.kotlin.test.mongodb.shared.SyncClientResetIntegrationTests.recoverOrDiscardUnsyncedChanges_recover(SyncClientResetIntegrationTests.kt:1297)
    at io.realm.kotlin.test.mongodb.shared.SyncClientResetIntegrationTests.access$recoverOrDiscardUnsyncedChanges_recover(SyncClientResetIntegrationTests.kt:73)
    at io.realm.kotlin.test.mongodb.shared.SyncClientResetIntegrationTests$recoverOrDiscardUnsyncedChanges_recover_flx$1$1.invoke(SyncClientResetIntegrationTests.kt:1263)
    at io.realm.kotlin.test.mongodb.shared.SyncClientResetIntegrationTests$recoverOrDiscardUnsyncedChanges_recover_flx$1$1.invoke(SyncClientResetIntegrationTests.kt:1262)
    at io.realm.kotlin.test.mongodb.shared.SyncClientResetIntegrationTests$TestEnvironment.performTest(SyncClientResetIntegrationTests.kt:135)
    at io.realm.kotlin.test.mongodb.shared.SyncClientResetIntegrationTests$Companion.performFlxTest-8Mi8wO0(SyncClientResetIntegrationTests.kt:273)
    at io.realm.kotlin.test.mongodb.shared.SyncClientResetIntegrationTests$Companion.performFlxTest-8Mi8wO0$default(SyncClientResetIntegrationTests.kt:261)
    at io.realm.kotlin.test.mongodb.shared.SyncClientResetIntegrationTests$recoverOrDiscardUnsyncedChanges_recover_flx$1.invokeSuspend(SyncClientResetIntegrationTests.kt:1262)
    at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
    at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
    at kotlinx.coroutines.EventLoopImplBase.processNextEvent(EventLoop.common.kt:284)
    at kotlinx.coroutines.BlockingCoroutine.joinBlocking(Builders.kt:85)
    at kotlinx.coroutines.BuildersKt__BuildersKt.runBlocking(Builders.kt:59)
    at kotlinx.coroutines.BuildersKt.runBlocking(Unknown Source:1)
    at io.realm.kotlin.internal.platform.CoroutineUtilsSharedJvmKt.runBlocking(CoroutineUtilsSharedJvm.kt:22)
    at io.realm.kotlin.internal.platform.CoroutineUtilsSharedJvmKt.runBlocking$default(CoroutineUtilsSharedJvm.kt:21)
    at io.realm.kotlin.test.mongodb.shared.SyncClientResetIntegrationTests.recoverOrDiscardUnsyncedChanges_recover_flx(SyncClientResetIntegrationTests.kt:1261)
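For context, the trace shows `Realm.open()` blocking in `SyncConfigurationImpl.initializeRealmData` until the initial-data timeout expires. A minimal configuration sketch of the path that produces this exception, assuming the test uses `waitForInitialRemoteData` with the default 1-minute timeout (the `user` and `TestObject` names below are placeholders, not taken from the failing test):

```kotlin
import io.realm.kotlin.Realm
import io.realm.kotlin.ext.query
import io.realm.kotlin.mongodb.exceptions.DownloadingRealmTimeOutException
import io.realm.kotlin.mongodb.sync.SyncConfiguration
import kotlin.time.Duration.Companion.minutes

// Sketch only: a flexible-sync config that waits for initial data on open.
val config = SyncConfiguration.Builder(user, setOf(TestObject::class))
    .initialSubscriptions { realm ->
        add(realm.query<TestObject>())
    }
    // Realm.open() will throw DownloadingRealmTimeOutException if the
    // server has not delivered the initial data within this window.
    .waitForInitialRemoteData(timeout = 1.minutes)
    .build()

try {
    val realm = Realm.open(config) // blocks until initial data arrives
    realm.close()
} catch (e: DownloadingRealmTimeOutException) {
    // The failure mode seen in the CI build above.
}
```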
Have not been able to reproduce locally in ~100 runs, or on CI. Opened https://github.com/realm/realm-kotlin/pull/1335 with additional logging to capture a trace of what is stalling, but so far no luck.