I'm investigating EventBridge; we currently use Kafka and want to confirm EventBridge is the best fit for us.
However, I haven't used EventBridge before and I'm a little confused by its rule and target limits.
I've seen advice suggesting you should use a separate rule for each target. However, there is a limit of 5 targets per rule and 300 rules per event bus.
So my question is: given these limits, how would you build a system where there might be hundreds of consumers for the same event?
Do you need to create your own fan-out mechanism, with Lambda for example?
Disclosure: EventBridge PM w/ Kafka background here.
Can you please describe your current Kafka setup e.g., how many brokers/topics and consumers/consumer groups per topic?
You are correct: the (soft) limit on rules per bus and the cap of 5 targets per rule do today limit the number of direct targets you can have per rule/bus. However, whether this matters depends on the overall event-driven architecture, and it might not be a concern depending on how you design the system, e.g., using multiple buses/accounts (see https://github.com/aws-samples/amazon-eventbridge-resource-policy-samples). We also added the capability to auto-increase the limits, and today requests of up to 2,000 rules per bus are typically approved automatically (see https://aws.amazon.com/about-aws/whats-new/2023/02/amazon-eventbridge-event-buses-enhanced-integration-aws-service-quotas/?nc1=h_ls).
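As a side note, the applied quota values for your account can be inspected programmatically via Service Quotas. A minimal boto3 sketch; the quota name substring matched below is an assumption, so check the actual names returned for service code `events`:

```python
def find_quota(quotas, name_substring):
    """Return the first quota dict whose QuotaName contains the substring."""
    needle = name_substring.lower()
    for quota in quotas:
        if needle in quota.get("QuotaName", "").lower():
            return quota
    return None

def print_rule_quota():
    """Look up the per-bus rule quota (requires boto3 and AWS credentials)."""
    import boto3  # local import so find_quota stays usable/testable offline
    client = boto3.client("service-quotas")
    # First page only; paginate for the full list of quotas.
    quotas = client.list_service_quotas(ServiceCode="events")["Quotas"]
    quota = find_quota(quotas, "rules")  # assumed name; inspect the listing to confirm
    if quota is not None:
        print(quota["QuotaName"], quota["Value"])
```

The same `service-quotas` client also exposes `request_service_quota_increase` if you want to raise the limit from code rather than the console.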
For high fan-out scenarios (thousands to millions of targets), an option is to use EventBridge with SNS as the target (potentially adding SQS to the mix), while still benefiting from EventBridge features such as easy payload filtering and routing, strong IAM controls for publishers/subscribers, centralized event handling, etc.
Let me know if this helps or if you'd like to discuss further.