Streamparse/Python - custom fail() method not working for error tuples


I'm using Storm to process messages from Kafka in real time, building my topology with streamparse. For this use case, it's imperative that every message entering Storm is processed and ack'd, with a 100% guarantee. I have implemented try/except logic in my bolt (see below), and in addition to writing failed messages to a separate "error" topic in Kafka, I would like Storm to replay them.

In my KafkaSpout, I assign each tuple's tup_id to the offset of the message in the Kafka topic my consumer reads from. However, when I force an error in my bolt with a bad variable reference, I don't see the message being replayed. I do see one write to the "error" Kafka topic, but only once, meaning the tuple is never resubmitted to my bolt(s). My topology.message.timeout.secs setting is 60, and I expect Storm to keep replaying the failed message every 60 seconds, with my except block writing to the error topic perpetually.
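
For reference, this is roughly how I set the timeout in my topology definition. This is a sketch: the topology class name is illustrative, and I'm assuming the streamparse Topology DSL accepts a class-level config dict (the same option can apparently also be passed on the command line with sparse submit -o "topology.message.timeout.secs=60").

from streamparse import Topology

from KafkaSpout import kafkaSpout
from processBolt import processBolt


class ReplayTopology(Topology):
    # how long Storm waits for an ack before calling fail() on the spout
    config = {"topology.message.timeout.secs": 60}

    kafka_spout = kafkaSpout.spec()
    process_bolt = processBolt.spec(inputs=[kafka_spout])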

KafkaSpout.py

import json

from pykafka import KafkaClient
from streamparse import Spout


class kafkaSpout(Spout):

    def initialize(self, stormconf, context):
        self.kafka = KafkaClient(str("host:6667"))  # offsets_channel_socket_timeout_ms=60000
        self.topic = self.kafka.topics[str("topic-1")]
        self.consumer = self.topic.get_balanced_consumer(
            consumer_group=str("consumergroup"),
            auto_commit_enable=False,
            zookeeper_connect=str("host:2181"))

    def next_tuple(self):
        for message in self.consumer:
            self.emit([json.loads(message.value)], tup_id=message.offset)
            self.log("spout emitting tuple ID (offset): " + str(message.offset))
            self.consumer.commit_offsets()

    def fail(self, tup_id):
        self.log("failing logic for consumer. resubmitting tup id: " + str(tup_id))
        # trying to re-emit the failed tuple here, but 'message' is not in
        # scope inside fail(), so nothing actually gets replayed
        self.emit([json.loads(message.value)], tup_id=message.offset)
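
Conceptually, what I think the replay path needs is sketched below: cache each emitted value keyed by its tup_id so fail() has something to re-emit, and only forget it (and commit offsets) once ack() fires. The _pending dict is purely my own illustration, not streamparse API, and I'm assuming pykafka's consume(block=False) returns a single message or None.

    def initialize(self, stormconf, context):
        # ... existing KafkaClient / balanced consumer setup ...
        self._pending = {}  # offset -> decoded value, kept until acked

    def next_tuple(self):
        message = self.consumer.consume(block=False)  # one message per call
        if message is not None:
            value = json.loads(message.value)
            self._pending[message.offset] = value     # remember for replay
            self.emit([value], tup_id=message.offset)

    def ack(self, tup_id):
        self._pending.pop(tup_id, None)  # fully processed, safe to forget
        self.consumer.commit_offsets()   # commit after the ack, not after emit

    def fail(self, tup_id):
        self.log("replaying tup id: " + str(tup_id))
        self.emit([self._pending[tup_id]], tup_id=tup_id)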

processBolt.py

import json
from collections import Counter

import requests
from pykafka import KafkaClient
from streamparse import Bolt


class processBolt(Bolt):

    # handle ack/fail manually so failed tuples can be replayed
    auto_ack = False
    auto_fail = False

    def initialize(self, conf, ctx):
        self.counts = Counter()
        self.kafka = KafkaClient(str("host:6667"), offsets_channel_socket_timeout_ms=60000)
        self.topic = self.kafka.topics[str("topic-2")]
        self.producer = self.topic.get_producer()

        self.failKafka = KafkaClient(str("host:6667"), offsets_channel_socket_timeout_ms=60000)
        self.failTopic = self.failKafka.topics[str("topic-error")]
        self.failProducer = self.failTopic.get_producer()

    def process(self, tup):
        try:
            self.log("found tup.")
            docId = tup.values[0]
            url = "http://solrserver.host.com/?id=" + str(docId)

            # undefined name: this is what I'm using to make the bolt
            # fail consistently
            thisIsMyForcedError = failingThisOnPurpose

            data = json.loads(requests.get(url).text)

            if len(data['response']['docs']) > 0:
                self.producer.produce(json.dumps(docId))
                self.log("record FOUND {0}.".format(docId))
            else:
                self.log("record NOT found {0}.".format(docId))

            self.ack(tup)

        except Exception:
            docId = tup.values[0]
            self.failProducer.produce(json.dumps(docId), partition_key=str("ERROR"))
            self.log("TUP FAILED IN PROCESS BOLT: " + str(docId))
            self.fail(tup)
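
One related detail: pykafka's get_producer() is asynchronous by default, so the error-topic write may still be sitting in the producer's queue when fail(tup) runs. If I'm reading pykafka correctly, a synchronous producer would make produce() block until delivery (a sketch, applied to my failProducer):

        # assumption: sync=True makes produce() block until the broker
        # confirms delivery, so the error record is durable before fail()
        self.failProducer = self.failTopic.get_producer(sync=True)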

I would appreciate any help with how to correctly implement the custom fail logic for this case. Thanks in advance.
