Type: Task
Resolution: Unresolved
Priority: Major - P3
Affects Version/s: None
Component/s: None
Atlas Streams
During some unrelated testing, the stream processor did not update its stats for multiple minutes while running the following pipeline. The input topic has 1000 documents, each with a 1 KB string field.
[
  { $source: {
      connectionName: kafkaPlaintextName,
      topic: topicName1,
      config: { auto_offset_reset: "earliest" }
  } },
  { $addFields: { i: { $range: [0, 100] } } },
  { $unwind: "$i" },
  { $addFields: { ii: { $range: [
      { $multiply: [ "$i", 100 ] },
      { $multiply: [ { $add: [ "$i", 1 ] }, 100 ] }
  ] } } },
  { $unwind: "$ii" },
  { $emit: { connectionName: kafkaPlaintextName, topic: "$_id" } }
]
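For context, my reading of the fan-out in this pipeline (not stated in the ticket): the first $range/$unwind pair produces 100 values of i per input document, and the second produces 100 values of ii per i, so each input document expands to 10,000 emitted documents. A minimal Python sketch of that expansion:

```python
def fan_out(doc):
    """Mimic the two $addFields/$unwind stages of the pipeline."""
    # First pair: i takes each value in $range: [0, 100]
    for i in range(0, 100):
        # Second pair: ii ranges over [i*100, (i+1)*100)
        for ii in range(i * 100, (i + 1) * 100):
            yield {**doc, "i": i, "ii": ii}

per_doc = sum(1 for _ in fan_out({"_id": 1}))
print(per_doc)          # 10000 outputs per input document
print(per_doc * 1000)   # 10000000 emits for the 1000-document topic
```

So a single batch of source documents can translate into ten million $emit calls, which is consistent with one Executor runOnce cycle running long enough to starve stats updates.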
How can we avoid having a very long Executor runOnce cycle in this case?
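One possible direction (my suggestion, not from the ticket): bound the amount of work a single runOnce cycle may do, so the executor yields between chunks and stats reporting can run. The cap name and structure below are hypothetical, not an actual Atlas Streams setting:

```python
MAX_DOCS_PER_CYCLE = 1000  # hypothetical per-cycle budget, not a real config knob

class Executor:
    def __init__(self, docs):
        # Pending work survives across cycles via a persistent iterator.
        self.pending = iter(docs)

    def run_once(self):
        """Process at most MAX_DOCS_PER_CYCLE documents, then return so
        bookkeeping (e.g. stats updates) can run between cycles."""
        processed = 0
        for doc in self.pending:
            processed += 1
            if processed >= MAX_DOCS_PER_CYCLE:
                break  # yield control; the rest is handled next runOnce
        return processed

ex = Executor(range(2500))
counts = []
while (n := ex.run_once()):
    counts.append(n)
print(counts)  # [1000, 1000, 500]
```

A time-based budget (checking a deadline inside the loop) would work the same way; the key point is that a high-fan-out pipeline like the one above should not be allowed to drain an entire batch in one uninterruptible cycle.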