MuleSoft Integration With Seek Kafka Connector

We know MuleSoft provides various options to integrate with different platforms like Salesforce, databases, AWS, and Azure, and one of the available options is Kafka too.

When I started my journey of integrating Kafka in MuleSoft using the Seek connector, it was hard to find any relevant information on the internet for reference😕. Hence, here I am summarizing the information that might help. By the way, for beginners: Kafka is a streaming platform, and the same events can be consumed across different applications using unique consumer group IDs.

So, let's start to understand and fly like a dragon........💃


MuleSoft provides the following connectors to integrate with Kafka:

  • Publish
  • Consume
  • Message listener
  • Batch message listener 
  • Commit
  • Seek, the star of this blog
In this blog, the spotlight is on Seek.

With the Seek connector you can define the offset and partition you want to start consuming from, irrespective of whether the event was already consumed and acknowledged. Seek suits best in scenarios where you want to consume the same message multiple times even though it was acknowledged in a previous request, but it is necessary to follow a few steps to avoid annoying issues🙇
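As a rough sketch, invoking Seek in a Mule flow looks like the XML below. The config name, topic, and variable names are placeholders, and the exact attribute spellings can vary by connector version, so treat this as an illustration rather than copy-paste configuration:

```xml
<!-- Sketch only: names and attribute spellings are assumptions; check your
     Kafka connector version's documentation before using. -->
<kafka:seek config-ref="Kafka_Consumer_config"
            topic="my-topic"
            partition="#[vars.targetPartition]"
            offset="#[vars.targetOffset]"/>
```

Note that the partition and offset are passed as numbers via DataWeave expressions, which matters for the step below.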

The Seek connector cannot establish connectivity to the Kafka topic directly, but you can achieve this by placing a Consume connector before Seek. 
Don't forget 🙅
  • To wrap the Consume connector in a Try scope with On Error Continue, otherwise it will throw an exception at runtime. 
  • To pass the offset and partition values as numbers to the Seek connector. 
There you go: once connectivity is established, your application is ready to consume messages from the defined offset. Unfortunately, though, the output of the Seek connector is not the target event. To actually retrieve the event payload from the Kafka topic, you need another Consume connector. The flow looks as below. 
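Putting the pieces together, a minimal flow sketch might look like this. The flow, config, topic, and variable names are placeholders, and attribute details may differ across connector versions:

```xml
<flow name="seek-and-consume-flow">
    <!-- 1. Initial Consume, wrapped in a Try scope so a failure here
         does not kill the flow (On Error Continue swallows the error) -->
    <try>
        <kafka:consume config-ref="Kafka_Consumer_config" topic="my-topic"/>
        <error-handler>
            <on-error-continue/>
        </error-handler>
    </try>

    <!-- 2. Seek to the partition/offset we actually want;
         both values must be numbers -->
    <kafka:seek config-ref="Kafka_Consumer_config"
                topic="my-topic"
                partition="#[vars.targetPartition]"
                offset="#[vars.targetOffset]"/>

    <!-- 3. Second Consume retrieves the event payload from the new
         position, since Seek itself does not return the target event -->
    <kafka:consume config-ref="Kafka_Consumer_config" topic="my-topic"/>
</flow>
```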



That's it! After implementing this flow, you are good to consume the event as many times as you want by passing the respective partition and offset values. 

The common challenge you might face is the Seek connector timing out, even after providing all the connectivity details correctly. The secret behind the issue is 🤔🤔

Yes, it is solved by defining the proper acknowledgment mode for your Consume connector. The preferred one is AUTO; it worked in my case😝. 
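As an illustration, the acknowledgment mode would be set on the Consume operation roughly as below. The attribute name `ackMode` and its exact placement are assumptions based on my setup, so verify against your connector version's documentation:

```xml
<!-- Assumed attribute name/placement; AUTO acknowledges records
     automatically, so Seek is not left waiting on a manual commit -->
<kafka:consume config-ref="Kafka_Consumer_config"
               topic="my-topic"
               ackMode="AUTO"/>
```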

Best of luck with your implementation, and please share your valuable feedback!

#myFirstBlog💫 #myMacBookAirM3 #midNightBlue💙

