- Type: Bug
- Resolution: Duplicate
- Priority: Major - P3
- Affects Version/s: None
- Component/s: Test
Summary
What is the problem or use case? What are we trying to achieve?
I'm trying to run the Docker example from the mongo-kafka repo to see whether it's possible to process a MongoDB change stream with Spark (it might be an antipattern; I'm just curious how all of this is going to function together).
Everything launches successfully, but the records that should be sinked from the Kafka topic never show up in MongoDB. The connector is in a degraded state in the UI, although I didn't change anything from master. I checked the `connect` container logs and found an error I don't fully understand:
{noformat}
java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: org/apache/kafka/connect/sink/ErrantRecordReporter
	at com.mongodb.kafka.connect.sink.MongoSinkTask.nopErrorReporter(MongoSinkTask.java:143)
	at com.mongodb.kafka.connect.sink.MongoSinkTask.createErrorReporter(MongoSinkTask.java:123)
	at com.mongodb.kafka.connect.sink.MongoSinkTask.start(MongoSinkTask.java:73)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:300)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoClassDefFoundError: org/apache/kafka/connect/sink/ErrantRecordReporter
	... 12 more
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.connect.sink.ErrantRecordReporter
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 12 more
{noformat}
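For anyone else confused by the top-level BootstrapMethodError: judging by the trace, nopErrorReporter builds its reporter as a lambda, and a lambda's functional interface is only loaded when the enclosing method actually runs, at the invokedynamic call site. Here is a stand-alone sketch of that failure mode; Reporter and Demo are made-up names standing in for ErrantRecordReporter and the sink task:
{code:java}
// Sketch of the failure mode in the trace, with made-up types: Reporter
// stands in for org.apache.kafka.connect.sink.ErrantRecordReporter.
// "javac Demo.java" produces Demo.class and Reporter.class; delete
// Reporter.class, then run "java Demo".
interface Reporter {
  void report(String record);
}

public class Demo {
  static Reporter nopReporter() {
    // invokedynamic: the LambdaMetafactory has to load Reporter to spin up
    // the lambda class, so a missing Reporter.class surfaces right here as
    // BootstrapMethodError caused by NoClassDefFoundError.
    return record -> { };
  }

  public static void main(String[] args) {
    System.out.println("Demo class loaded fine");  // loading alone succeeds
    nopReporter();  // the error fires only when the lambda is first created
  }
}
{code}
That matches the shape of the failure here: MongoSinkTask only blows up when the worker calls start(), not when the plugin is scanned.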
MY UNEDUCATED GUESS: the task fails in start() while setting up its error reporter, because the class org/apache/kafka/connect/sink/ErrantRecordReporter is not on the classpath; that interface was only added in Apache Kafka 2.6 (KIP-610), so the Connect runtime in the compose file is presumably older than that.
I've tried replacing the 1.7.0 release (current master) with the 1.6.1 release; the result was the same.
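Judging by the title of the linked duplicate below, the fix direction is for the connector to tolerate the interface being absent. A minimal sketch of that pattern, assuming a connector-internal ErrorReporter interface and a hypothetical ErrorReporters factory; this is not the actual mongo-kafka code:
{code:java}
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTaskContext;

public final class ErrorReporters {

  // Connector-internal functional interface, so the fallback path never
  // references org.apache.kafka.connect.sink.ErrantRecordReporter at all.
  @FunctionalInterface
  public interface ErrorReporter {
    void report(SinkRecord record, Exception e);
  }

  public static ErrorReporter create(final SinkTaskContext context) {
    try {
      // Probe first: the interface only exists on Kafka 2.6+ (KIP-610).
      Class.forName("org.apache.kafka.connect.sink.ErrantRecordReporter");
      return fromErrantRecordReporter(context);
    } catch (ClassNotFoundException | NoClassDefFoundError | NoSuchMethodError e) {
      // Older runtime: return a no-op lambda over our own interface, so no
      // BootstrapMethodError can occur while building the fallback.
      return (record, ex) -> { /* DLQ-style reporting unavailable */ };
    }
  }

  // Kept in its own method so the ErrantRecordReporter type is only resolved
  // once the probe above has confirmed it exists.
  private static ErrorReporter fromErrantRecordReporter(final SinkTaskContext context) {
    org.apache.kafka.connect.sink.ErrantRecordReporter reporter =
        context.errantRecordReporter();
    if (reporter == null) {
      // errors.tolerance / dead letter queue not configured for this task
      return (record, ex) -> { };
    }
    return reporter::report;
  }
}
{code}
The important detail is that the fallback lambda implements the connector's own interface; judging by the trace, the shipped code trips up precisely because even its no-op fallback (nopErrorReporter) is a lambda whose type pulls in the missing ErrantRecordReporter.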
Motivation
Who is the affected end user?
I guess everyone who is trying to use mongo-kafka, which is why I set the priority to Major.
How does this affect the end user?
End users cannot try out the system; they are blocked.
How likely is it that this problem or use case will occur?
100% of the time, since I'm running the docker-compose setup from the repo without any changes.
If the problem does occur, what are the consequences and how severe are they?
The connector in the example does not work, so the example does not work either.
Is this issue urgent?
IDK
Is this ticket required by a downstream team?
IDK
Is this ticket only for tests?
No
Cast of Characters
Engineering Lead:
Document Author:
POCers:
Product Owner:
Program Manager:
Stakeholders:
Channels & Docs
Slack Channel
[Scope Document|some.url]
[Technical Design Document|some.url]
- duplicates KAFKA-286: Mongo sink connector must tolerate the ErrantRecordReporter being not available (Closed)