I'm looking for a way to test a Kafka Streams application, so that I can define the input events and the test suite shows me the output.
Is this possible without a real Kafka setup?
Since you are asking whether it is possible to test a Kafka Streams application without a real Kafka setup, you might try the Mocked Streams library for Scala. Mocked Streams 1.0 is a library for Scala >= 2.11.8 that allows you to unit-test processing topologies of Kafka Streams applications (Apache Kafka >= 0.10.1) without ZooKeeper and Kafka brokers. Reference: https://github.com/jpzk/mockedstreams
You can also use scalatest-embedded-kafka, a library that provides an in-memory Kafka broker to run your ScalaTest specs against. It uses Kafka 0.10.1.1 and ZooKeeper 3.4.8.
Reference: https://github.com/manub/scalatest-embedded-kafka#scalatest-embedded-kafka-streams
Good luck!
You should check out Kafka Unit.
Your test setup should look something like this:
And then, to read your messages back and assert that everything went OK, you do something like this:
This actually spins up an embedded Kafka broker, so everything you need is contained in the test.
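A minimal sketch of such a test, assuming the kafka-unit library's `KafkaUnit` class (older versions of which use `kafka.producer.KeyedMessage` for sending); the ports and topic name here are arbitrary examples:

```java
import java.util.List;

import info.batey.kafka.unit.KafkaUnit;
import kafka.producer.KeyedMessage;

public class KafkaUnitTestSketch {

    public static void main(String[] args) throws Exception {
        // Spin up an embedded ZooKeeper + Kafka broker (ports are arbitrary examples)
        KafkaUnit kafkaUnit = new KafkaUnit(5000, 5001);
        kafkaUnit.startup();
        kafkaUnit.createTopic("test-topic");

        // Write a message to the embedded broker
        kafkaUnit.sendMessages(new KeyedMessage<>("test-topic", "key", "value"));

        // Read it back; readMessages waits for the expected number of
        // messages (or times out), so you can assert on the result
        List<String> messages = kafkaUnit.readMessages("test-topic", 1);
        System.out.println(messages.get(0)); // prints the message value

        kafkaUnit.shutdown();
    }
}
```

In a real suite you would move the startup/shutdown calls into your test framework's lifecycle hooks rather than `main`.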
You could get a little fancier and set up your embedded Kafka in a `setup()` method (or `setupSpec()` in Spock) and stop it in a `teardown()` method.

If you want to test a Kafka Streams topology that uses the Processor API, the code provided by Dmitry may not work properly. So after a few hours of researching the Javadocs and official docs, I came up with working code to test a custom processor you have implemented, using JUnit.
You can use https://github.com/jpzk/mockedstreams; see the example below.

Hope this helps you!
Update Kafka 1.1.0 (released 23-Mar-2018):
KIP-247 added official test utils; per the Upgrade Guide, they ship in the new kafka-streams-test-utils artifact.
From the documentation:
The test driver simulates the library runtime that continuously fetches records from input topics and processes them by traversing the topology. You can use the test driver to verify that your specified processor topology computes the correct result with the manually piped in data records. The test driver captures the results records and allows to query its embedded state stores:
See the documentation for details.
`ProcessorTopologyTestDriver` is available as of 0.11.0.0. It ships in the `kafka-streams` test artifact (specified with `<classifier>test</classifier>` in Maven). You will also need to add the `kafka-clients` test artifact.

Then you can use the test driver. Per the Javadoc, first create a `ProcessorTopologyTestDriver`. You can then feed input into the topology as though you had actually written to one of the input topics, and read the output topics.
Then you can assert on these results.
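A sketch of that flow, assuming the 0.11-era API (`ProcessorTopologyTestDriver` lived in the `org.apache.kafka.test` package of the test artifact and is internal, not public, API; the topic names and topology here are illustrative):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.test.ProcessorTopologyTestDriver;

public class ProcessorTopologyTestDriverSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never contacted
        StreamsConfig config = new StreamsConfig(props);

        // Illustrative topology: uppercase values from input to output
        KStreamBuilder builder = new KStreamBuilder();
        builder.stream(Serdes.String(), Serdes.String(), "input-topic")
               .mapValues(v -> v.toUpperCase())
               .to(Serdes.String(), Serdes.String(), "output-topic");

        ProcessorTopologyTestDriver driver =
            new ProcessorTopologyTestDriver(config, builder);

        // Feed input as though it had been written to the input topic
        driver.process("input-topic", "key", "hello",
            new StringSerializer(), new StringSerializer());

        // Read the output topic and assert on the record
        ProducerRecord<String, String> record = driver.readOutput(
            "output-topic", new StringDeserializer(), new StringDeserializer());
        System.out.println(record.value()); // HELLO
    }
}
```

Since this class was internal, it was replaced by the public `TopologyTestDriver` in Kafka 1.1.0, which is the better choice on newer versions.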
You can also just run a single ZooKeeper node and broker locally to test a Kafka Streams application.
Just follow these quick start guides:
Also check out these Kafka Streams examples (with detailed walkthrough instructions in the JavaDocs):