docs/old-reference-guide/modules/ROOT/pages/consuming.adoc
:navtitle: Consuming Events From Kafka
= Consuming Events from Kafka

Event messages in an Axon application can be consumed through either a Subscribing or a Tracking xref:axon_framework_old_ref:events:event-processors/README.adoc[Event Processor]. Both options remain available when consuming events from a Kafka topic, which from a set-up perspective translates to a xref:#subscribable-message-source[SubscribableKafkaMessageSource] or a xref:#streamable-message-source[StreamableKafkaMessageSource] respectively. Both are described in more detail later on; first, we shed light on the general requirements for consuming events from Kafka in an Axon application.

Both approaches use a similar mechanism to poll events with a Kafka `Consumer`, which breaks down to a combination of a `ConsumerFactory` and a `Fetcher`. The extension provides a `DefaultConsumerFactory`, whose sole requirement is a `Map` of configuration properties. The `Map` contains the settings to use for the Kafka `Consumer` client, such as the Kafka instance locations. Please check the link:https://kafka.apache.org/[Kafka documentation,window=_blank,role=external] for the possible settings and their values.

[source,java]
----
public class KafkaEventConsumptionConfiguration {
    // ...
    public ConsumerFactory<String, byte[]> consumerFactory(Map<String, Object> consumerConfiguration) {
        return new DefaultConsumerFactory<>(consumerConfiguration);
    }
    // ...
}
----
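
The contents of this `Map` depend entirely on your Kafka environment. As a minimal sketch (the class name, broker address, and deserializer choices below are illustrative assumptions; the deserializers simply match the `<String, byte[]>` generics used throughout this page):

[source,java]
----
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaConsumerPropertiesConfiguration {

    public Map<String, Object> consumerConfiguration() {
        Map<String, Object> configuration = new HashMap<>();
        // Placeholder address of the Kafka broker(s) to connect to.
        configuration.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Deserializers matching the <String, byte[]> key/value types used in these samples.
        configuration.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        configuration.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        return configuration;
    }
}
----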

It is the `Fetcher` instance's job to retrieve the actual messages from Kafka by directing a `Consumer` instance it receives from the message source. You can draft your own implementation or use the provided `AsyncFetcher` to this end. The `AsyncFetcher` doesn't need to be started explicitly, as it reacts to the message source starting it. It does need to be shut down, to ensure any thread pool or active connections are closed properly.

[source,java]
----
public class KafkaEventConsumptionConfiguration {
    // ...
    public Fetcher<?, ?, ?> fetcher(long timeoutMillis,
                                    ExecutorService executorService) {
        return AsyncFetcher.builder()
                           .pollTimeout(timeoutMillis)       // Defaults to "5000" milliseconds
                           .executorService(executorService) // Defaults to a cached thread pool executor
                           .build();
    }
    // ...
}
----

[[subscribable-message-source]]
== Consuming Events with a subscribable message source

Using the `SubscribableKafkaMessageSource` means you are inclined to use a `SubscribingEventProcessor` to consume the events in your event handlers.

When using this source, Kafka's concept of pairing `Consumer` instances into "Consumer Groups" is used. This is enforced by making the `groupId` a _hard requirement_ upon source construction. Using a common `groupId` essentially means the event-stream workload can be shared on Kafka's terms, whereas a `SubscribingEventProcessor` typically works on its own accord regardless of the number of instances. The workload sharing can be achieved by having several application instances with the same `groupId` or by adjusting the consumer count through the `SubscribableKafkaMessageSource` builder. The same benefit holds for xref:axon_framework_old_ref:events:event-processors/streaming.adoc#replaying-events[resetting] an event stream, which in Axon is reserved for the `TrackingEventProcessor`, but is now opened up through Kafka's own APIs.

Although the `SubscribableKafkaMessageSource` thus provides the niceties the tracking event processor normally provides, it does come with two catches:

. Axon's approach of using the `SequencingPolicy` to deduce which thread receives which events is entirely lost. Which events your handlers receive thus depends on which topic-partition pairs are assigned to a `Consumer`. From a usage perspective this means event message ordering is no longer guaranteed by Axon. It is thus the user's job to ensure events are published in the right topic-partition pair.

. The API Axon provides for resets is entirely lost, since this API can only be triggered correctly through the `TrackingEventProcessor#resetTokens` operation.

Due to the above, it is recommended that users familiarize themselves with Kafka's specifics on message consumption.

When it comes to configuring a `SubscribableKafkaMessageSource` as a message source for a `SubscribingEventProcessor`, there is one additional requirement besides source creation and registration. The source should only start polling for events once all interested subscribing event processors have been subscribed to it. To ensure the `SubscribableKafkaMessageSource#start()` operation is called at the right point in the configuration lifecycle, the `KafkaMessageSourceConfigurer` should be utilized:

[source,java]
----
public class KafkaEventConsumptionConfiguration {
    // ...
    public KafkaMessageSourceConfigurer kafkaMessageSourceConfigurer(Configurer configurer) {
        KafkaMessageSourceConfigurer kafkaMessageSourceConfigurer = new KafkaMessageSourceConfigurer();
        configurer.registerModule(kafkaMessageSourceConfigurer);
        return kafkaMessageSourceConfigurer;
    }

    public SubscribableKafkaMessageSource<String, byte[]> subscribableKafkaMessageSource(List<String> topics,
                                                                                         String groupId,
                                                                                         ConsumerFactory<String, byte[]> consumerFactory,
                                                                                         Fetcher<String, byte[], EventMessage<?>> fetcher,
                                                                                         KafkaMessageConverter<String, byte[]> messageConverter,
                                                                                         int consumerCount,
                                                                                         KafkaMessageSourceConfigurer kafkaMessageSourceConfigurer) {
        SubscribableKafkaMessageSource<String, byte[]> subscribableKafkaMessageSource =
                SubscribableKafkaMessageSource.<String, byte[]>builder()
                                              .topics(topics)                     // Defaults to a collection of "Axon.Events"
                                              .groupId(groupId)                   // Hard requirement
                                              .consumerFactory(consumerFactory)   // Hard requirement
                                              .fetcher(fetcher)                   // Hard requirement
                                              .messageConverter(messageConverter) // Defaults to a "DefaultKafkaMessageConverter"
                                              .consumerCount(consumerCount)       // Defaults to a single Consumer
                                              .build();
        // Registering the source is required to tie into the Configurer's lifecycle,
        // starting the source at the right stage.
        kafkaMessageSourceConfigurer.registerSubscribableSource(configuration -> subscribableKafkaMessageSource);
        return subscribableKafkaMessageSource;
    }

    public void configureSubscribableKafkaSource(EventProcessingConfigurer eventProcessingConfigurer,
                                                 String processorName,
                                                 SubscribableKafkaMessageSource<String, byte[]> subscribableKafkaMessageSource) {
        eventProcessingConfigurer.registerSubscribingEventProcessor(
                processorName,
                configuration -> subscribableKafkaMessageSource
        );
    }
    // ...
}
----

The `KafkaMessageSourceConfigurer` is an Axon `ModuleConfiguration` which ties into the application's start and shutdown lifecycle. It should receive the `SubscribableKafkaMessageSource` as a source to start and stop. The `KafkaMessageSourceConfigurer` instance itself should be registered as a module with the main `Configurer`.

If only a single subscribing event processor will be subscribed to the Kafka message source, `SubscribableKafkaMessageSource.Builder#autoStart()` can be toggled on. This will start the `SubscribableKafkaMessageSource` upon the first subscription.
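
As a sketch (the class and method names are illustrative), the only difference from the earlier configuration is the extra builder call:

[source,java]
----
public class KafkaAutoStartConfiguration {
    // ...
    public SubscribableKafkaMessageSource<String, byte[]> autoStartingKafkaMessageSource(String groupId,
                                                                                         ConsumerFactory<String, byte[]> consumerFactory,
                                                                                         Fetcher<String, byte[], EventMessage<?>> fetcher) {
        return SubscribableKafkaMessageSource.<String, byte[]>builder()
                                             .groupId(groupId)                 // Hard requirement
                                             .consumerFactory(consumerFactory) // Hard requirement
                                             .fetcher(fetcher)                 // Hard requirement
                                             .autoStart()                      // Start upon the first subscription
                                             .build();
    }
    // ...
}
----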

[[streamable-message-source]]
== Consuming Events with a streamable message source

Using the `StreamableKafkaMessageSource` means you are inclined to use a `TrackingEventProcessor` to consume the events in your event handlers.

Whereas the xref:#subscribable-message-source[subscribable Kafka message source] uses Kafka's idea of sharing the workload through multiple `Consumer` instances in the same "Consumer Group", the streamable approach doesn't use a consumer group and assigns all available partitions.

[source,java]
----
public class KafkaEventConsumptionConfiguration {
    // ...
    public StreamableKafkaMessageSource<String, byte[]> streamableKafkaMessageSource(List<String> topics,
                                                                                     ConsumerFactory<String, byte[]> consumerFactory,
                                                                                     Fetcher<String, byte[], KafkaEventMessage> fetcher,
                                                                                     KafkaMessageConverter<String, byte[]> messageConverter,
                                                                                     int bufferCapacity) {
        return StreamableKafkaMessageSource.<String, byte[]>builder()
                                           .topics(topics)                     // Defaults to a collection of "Axon.Events"
                                           .consumerFactory(consumerFactory)   // Hard requirement
                                           .fetcher(fetcher)                   // Hard requirement
                                           .messageConverter(messageConverter) // Defaults to a "DefaultKafkaMessageConverter"
                                           .bufferFactory(
                                                   () -> new SortedKafkaMessageBuffer<>(bufferCapacity)) // Defaults to a "SortedKafkaMessageBuffer" with a buffer capacity of "1000"
                                           .build();
    }

    public void configureStreamableKafkaSource(EventProcessingConfigurer eventProcessingConfigurer,
                                               String processorName,
                                               StreamableKafkaMessageSource<String, byte[]> streamableKafkaMessageSource) {
        eventProcessingConfigurer.registerTrackingEventProcessor(
                processorName,
                configuration -> streamableKafkaMessageSource
        );
    }
    // ...
}
----

Note that, as with any tracking event processor, the progress on the event stream is stored in a `TrackingToken`. Using the `StreamableKafkaMessageSource` means a `KafkaTrackingToken` containing topic-partition to offset pairs is stored in the `TokenStore`. If no other `TokenStore` is provided and auto-configuration is used, a `KafkaTokenStore` will be set instead of an `InMemoryTokenStore`. The `KafkaTokenStore` by default uses the `__axon_token_store_updates` topic. This should be a compacted topic, which should be created and configured automatically.
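
If you prefer to control the `TokenStore` explicitly rather than rely on auto-configuration, you can register one per processor through the `EventProcessingConfigurer`. A minimal sketch, using Axon's `InMemoryTokenStore` purely for illustration (it loses progress on restart, so it is only suitable for tests):

[source,java]
----
public class KafkaTokenStoreConfiguration {
    // ...
    public void configureTokenStore(EventProcessingConfigurer eventProcessingConfigurer,
                                    String processorName) {
        // For production, replace the InMemoryTokenStore with a durable implementation.
        eventProcessingConfigurer.registerTokenStore(
                processorName,
                configuration -> new InMemoryTokenStore()
        );
    }
    // ...
}
----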
:navtitle: Kafka Extension Guide
= Kafka Extension

Apache Kafka is a popular system for publishing and consuming events. Its architecture is fundamentally different from most messaging systems and combines speed with reliability.

Axon provides an extension dedicated to _publishing_ and _receiving_ event messages from Kafka. The Kafka Extension should be regarded as an alternative approach to distributing events, besides (the default) Axon Server. It is also possible to use the extension to stream events from Kafka to Axon Server, or the other way around.

The implementation of the extension can be found link:https://github.com/AxonFramework/extension-kafka[here,window=_blank,role=external]. The shared repository also contains a link:https://github.com/AxonFramework/extension-kafka/tree/master/kafka-axon-example[sample project,window=_blank,role=external] using the extension.

To use the Kafka Extension components from Axon, make sure the `axon-kafka` module is available on the classpath. Using the extension requires setting up and configuring Kafka following your project's requirements. How this is achieved is outside the scope of this reference guide and can be found in link:https://kafka.apache.org/[Kafka's documentation,window=_blank,role=external].

NOTE: Kafka is a perfectly fine event distribution mechanism, but it is not an event store. Along those lines, this extension only provides the means to distribute Axon's events through Kafka. Because of this, the extension cannot be used to event source aggregates, as that requires an event store implementation. We recommend using a built-for-purpose event store like link:https://www.axoniq.io/products/axon-server[Axon Server,window=_blank,role=external], or alternatively an RDBMS-based one (the JPA or JDBC implementations, for example).

docs/old-reference-guide/modules/ROOT/pages/message-format.adoc
:navtitle: Customizing Event Message Format
= Customizing Event Message Format

In the previous sections, the `KafkaMessageConverter<K, V>` has been shown as a requirement for event production and consumption. The `K` is the format of the message's key, whereas the `V` stands for the message's value. The extension provides a `DefaultKafkaMessageConverter`, which converts an Axon `EventMessage` to a Kafka `ProducerRecord`, and a `ConsumerRecord` back into an `EventMessage`. This `DefaultKafkaMessageConverter` uses `String` as the key and `byte[]` as the value of the message to de-/serialize.

Although it is the default, this implementation allows for some customization, such as how the `EventMessage`'s `MetaData` is mapped to Kafka headers. This is achieved by adjusting the "header value mapper" in the `DefaultKafkaMessageConverter` builder.

The `SequencingPolicy` can be adjusted to change which record key is used. The default sequencing policy is the `SequentialPerAggregatePolicy`, which makes the aggregate identifier of an event the key of a `ProducerRecord` and `ConsumerRecord`.
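
As an illustration, a custom policy could key records on a (hypothetical) `tenantId` meta-data entry instead of the aggregate identifier, so that all events of one tenant end up in the same topic-partition:

[source,java]
----
import org.axonframework.eventhandling.EventMessage;
import org.axonframework.eventhandling.async.SequencingPolicy;

// A sketch of a custom SequencingPolicy; the "tenantId" meta-data key is hypothetical.
public class TenantSequencingPolicy implements SequencingPolicy<EventMessage<?>> {

    @Override
    public Object getSequenceIdentifierFor(EventMessage<?> event) {
        // Events with the same tenantId receive the same record key,
        // and hence land in the same topic-partition.
        return event.getMetaData().get("tenantId");
    }
}
----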

The format of an event message defines an API between the producer and the consumer of the message. This API may change over time, leading to incompatibility between the event class' structure on the receiving side and the event structure of a message containing the old format. Axon addresses the topic of xref:axon_framework_old_ref:events:event-versioning.adoc[Event Versioning] by introducing event upcasters. The `DefaultKafkaMessageConverter` supports this by provisioning an `EventUpcasterChain` and running the upcasting process on the `MetaData` and `Payload` of individual messages converted from a `ConsumerRecord`, before they are passed to the `Serializer` and converted into `Event` instances.

Note that the `KafkaMessageConverter` feeds the upcasters with messages one-by-one, limiting it to one-to-one or one-to-many upcasting only. Upcasters performing a many-to-one or many-to-many operation thus won't be able to operate inside the extension (yet).
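
A one-to-one upcaster compatible with this setup could be sketched as follows. This assumes a Jackson-based `Serializer`, so the intermediate payload representation is an `ObjectNode`; the event type and the added field are hypothetical:

[source,java]
----
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.axonframework.serialization.SerializedType;
import org.axonframework.serialization.SimpleSerializedType;
import org.axonframework.serialization.upcasting.event.IntermediateEventRepresentation;
import org.axonframework.serialization.upcasting.event.SingleEventUpcaster;

public class OrderCreatedEventUpcaster extends SingleEventUpcaster {

    private static final SerializedType TARGET_TYPE =
            new SimpleSerializedType("com.example.OrderCreatedEvent", null);

    @Override
    protected boolean canUpcast(IntermediateEventRepresentation intermediateRepresentation) {
        // Only upcast the original (un-revisioned) format of this event.
        return intermediateRepresentation.getType().equals(TARGET_TYPE);
    }

    @Override
    protected IntermediateEventRepresentation doUpcast(IntermediateEventRepresentation intermediateRepresentation) {
        return intermediateRepresentation.upcastPayload(
                new SimpleSerializedType(TARGET_TYPE.getName(), "2"),
                ObjectNode.class,
                payload -> payload.put("orderType", "UNKNOWN") // Hypothetical new field with a default value
        );
    }
}
----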

Lastly, the `Serializer` used by the converter can be adjusted. See the xref:axon_framework_old_ref:ROOT:serialization.adoc[Serializer] section for more details on this.

[source,java]
----
public class KafkaMessageConversionConfiguration {
    // ...
    public KafkaMessageConverter<String, byte[]> kafkaMessageConverter(Serializer serializer,
                                                                       SequencingPolicy<? super EventMessage<?>> sequencingPolicy,
                                                                       BiFunction<String, Object, RecordHeader> headerValueMapper,
                                                                       EventUpcasterChain upcasterChain) {
        return DefaultKafkaMessageConverter.builder()
                                           .serializer(serializer)               // Hard requirement
                                           .sequencingPolicy(sequencingPolicy)   // Defaults to a "SequentialPerAggregatePolicy"
                                           .upcasterChain(upcasterChain)         // Defaults to an empty upcaster chain
                                           .headerValueMapper(headerValueMapper) // Defaults to "HeaderUtils#byteMapper()"
                                           .build();
    }
    // ...
}
----

Make sure to use an identical `KafkaMessageConverter` on both the producing and consuming end, as otherwise exceptions upon deserialization should be expected. A `CloudEventKafkaMessageConverter` is also available, based on the link:https://cloudevents.io/[Cloud Events spec,window=_blank,role=external].