Error Channel
Ecotone comes with a solution called the Error Channel.
The Error Channel is a place where unrecoverable Errors can go. This way we preserve Error Messages even if we cannot handle them at a given moment.
The Error Channel may log those Messages, store them in a database, or push them to an Asynchronous Channel; it all depends on what Handler we connect to the Error Channel.
On a high level, the Error Channel works as follows:
- The Message Consumer polls Messages from the Queue and executes the related Message Handlers.
- When execution of a given Handler fails, the Error is propagated back to the Message Consumer.
- The Message Consumer, based on its configuration, sends the Message to the related Error Channel.
Error Channel can be configured per Message Consumer, or globally as default Error Channel for all Message Consumers:
- Symfony
- Laravel
- Lite
config/packages/ecotone.yaml
ecotone:
    defaultErrorChannel: "errorChannel"
class Configuration
{
#[ServiceContext]
public function configuration() : array
{
return [
// For Message Consumer orders, configure error channel
PollingMetadata::create("orders")
->setErrorChannelName("errorChannel")
];
}
}
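For Laravel, the equivalent setting can be placed in the published Ecotone configuration file. This is a minimal sketch; the file location and key name are assumed from the standard Ecotone Laravel setup, so verify them against your installed version:

```php
// config/ecotone.php — sketch; key name assumed to match the Symfony option
return [
    'defaultErrorChannel' => 'errorChannel',
];
```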
To handle incoming Error Messages, we can bind to our defined Error Channel using an Internal Handler:
#[InternalHandler("errorChannel")]
public function handle(ErrorMessage $errorMessage): void
{
// handle exception
$exception = $errorMessage->getExceptionMessage();
}
Ecotone provides an inbuilt retry mechanism: in case of failure, the Error Message will be resent to its original Message Channel with a delay. This way we give the application a chance to self-heal and return to a good state.
To configure Delayed Retries we need to set up Error Configuration and connect it to our Error Channel:
#[ServiceContext]
public function errorConfiguration()
{
return ErrorHandlerConfiguration::create(
"errorChannel",
RetryTemplateBuilder::exponentialBackoff(1000, 10)
->maxRetryAttempts(3)
);
}
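As a sketch of how the exponential backoff above behaves (assuming the first argument is the initial delay in milliseconds and the second is the multiplier; verify this against your Ecotone version), the retry delays would grow as follows:

```php
// Assumed semantics: exponentialBackoff(initialDelayMs, multiplier)
// attempt 1: 1000 ms              (1 second)
// attempt 2: 1000 * 10  = 10000 ms  (10 seconds)
// attempt 3: 1000 * 10^2 = 100000 ms (100 seconds)
RetryTemplateBuilder::exponentialBackoff(1000, 10)
    ->maxRetryAttempts(3);
```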
If in some cases we want to discard Error Messages, we can set the error channel to the inbuilt default called "nullChannel". This may be used in combination with retries: if after the given attempts the Message is still not handled, it is discarded:
#[ServiceContext]
public function errorConfiguration()
{
return ErrorHandlerConfiguration::createWithDeadLetterChannel(
"errorChannel",
RetryTemplateBuilder::exponentialBackoff(1000, 10)
->maxRetryAttempts(3),
// if retry strategy will not recover, then discard
"nullChannel"
);
}
Ecotone comes with full support for managing the full life cycle of an error Message. This allows us to store the Message in a database for later review. Then we can review the Message, replay it, or delete it.
Read more in the next section.
Dead Letter can be combined with Delayed Retries, to store only those Error Messages that can't self-heal. Read more in the related section.
Ecotone provides the ability to define an Error Channel at the level of the Command Bus. This way we can handle synchronous Command Handler execution failures gracefully. For example, we may receive webhooks, and in case of failure, instead of throwing an exception, we can store the related Message in the Dead Letter for later review, or push it for reprocessing to an asynchronous channel. This way we can ensure the stability of our system, even for synchronous scenarios like HTTP integrations.
Command Bus Error Channel is available as part of Ecotone Enterprise.
To set up an Error Channel for the Command Bus, we extend the Command Bus with our own Interface and add the ErrorChannel attribute:
#[ErrorChannel("dbal_dead_letter")]
interface ResilientCommandBus extends CommandBus
{
}
Now, instead of using CommandBus, we will use ResilientCommandBus for sending Commands. Whenever a failure happens, instead of being propagated, it will be redirected to our Dead Letter and stored in the database for later review.
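As a usage sketch, the extended bus is injected and used exactly like the regular CommandBus (the OrderController and PlaceOrder names below are illustrative, not part of Ecotone):

```php
final readonly class OrderController
{
    public function __construct(private ResilientCommandBus $commandBus) {}

    public function placeOrder(PlaceOrder $command): void
    {
        // On failure the Message is redirected to the "dbal_dead_letter"
        // channel instead of the exception bubbling up to the caller.
        $this->commandBus->send($command);
    }
}
```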
We can extend our Command Bus with Error Channel by adding instant retries. This way we can do automatic retries before we consider the Message as failed and move it to the Error Channel, giving ourselves a chance of automatic self-healing in case of transient errors, like database or network exceptions.
#[InstantRetry(retryTimes: 2)]
#[ErrorChannel("dbal_dead_letter")]
interface ResilientCommandBus extends CommandBus
{
}
With this configuration, each Command will be instantly retried up to two times; only if all attempts fail will the Message be redirected to our Dead Letter and stored in the database for later review.
Instead of pushing the Message to an Error Channel for storage, we can push it to an Asynchronous Message Channel from which the Message will be consumed and retried. This way, in case of failure, the Message can be retried and end up self-healing.
#[ErrorChannel("async_channel")]
interface ResilientCommandBus extends CommandBus
{
}
and then use, for example, a RabbitMQ-backed Message Channel:
final readonly class EcotoneConfiguration
{
    #[ServiceContext]
    public function asyncChannel()
    {
        // Channel name must match the one used in the ErrorChannel attribute
        return AmqpBackedMessageChannelBuilder::create('async_channel');
    }
}
It's good practice to use a different Message Channel implementation than the storage used while processing the Message. For example, if our processing requires a database connection and our database goes down, then, as long as our configured channel is a RabbitMQ channel, we will still be able to push those Messages into the Queue instead of failing.
We can combine the Asynchronous Error Channel with delayed retries, creating a robust solution in which our Application is able to self-heal from transient errors even if they last for some period of time. For example, if a call to an external Service fails, or the database goes down, we may receive the same error when the Message is consumed again from the Async Channel. However, if we delay the retry by 20 seconds, there is a good chance that everything will be back on track, and the Application will self-heal automatically.
Command Bus configuration:
#[ErrorChannel("async_channel")]
interface ResilientCommandBus extends CommandBus
{
}
And delayed retry configuration:
#[ServiceContext]
public function errorConfiguration()
{
return ErrorHandlerConfiguration::create(
"async_channel",
RetryTemplateBuilder::exponentialBackoff(1000, 10)
->maxRetryAttempts(3)
);
}
Of course, we could add a Dead Letter channel to our delayed retries configuration, closing the full flow: even if the delayed retries fail, we end up with the Message in the Dead Letter.
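Such a closed flow could look as follows, reusing createWithDeadLetterChannel from earlier. This is a sketch; the "dbal_dead_letter" target assumes the Dbal Dead Letter support is set up in your project:

```php
#[ServiceContext]
public function errorConfiguration()
{
    return ErrorHandlerConfiguration::createWithDeadLetterChannel(
        "async_channel",
        RetryTemplateBuilder::exponentialBackoff(1000, 10)
            ->maxRetryAttempts(3),
        // after retries are exhausted, store the Message for later review
        "dbal_dead_letter"
    );
}
```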