Diagnostic logging ‘events’ and benefits of decoupled log-processors by Jang-Vijay Singh
Blog: PaaS Community
Good logging frameworks are typically implemented asynchronously, for good reason: the background work involved in logging must not add overhead to the main flow. Even the simplest action, such as writing output to a log file, is performed behind the scenes on separate threads.
From a log-analysis and diagnostics point of view this is not a problem, because each log entry carries a timestamp that records the instant the entry was requested (rather than the time it was actually written to the log file).
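The idea above can be sketched in a few lines. This is a minimal, illustrative example (not the internals of any particular framework): the hot path only enqueues the entry, a background thread does the slow I/O, and the timestamp is captured when the entry is requested, not when it is written.

```python
# Minimal sketch of asynchronous logging. The caller only enqueues the
# entry; a background thread performs the slow write. All names here
# are illustrative assumptions, not a real framework's API.
import queue
import threading
import time

log_queue = queue.Queue()
written = []  # stands in for the log file


def writer():
    # Background thread: drains the queue and "writes" each entry.
    while True:
        entry = log_queue.get()
        if entry is None:        # shutdown sentinel
            break
        time.sleep(0.01)         # simulate slow disk I/O
        written.append(entry)


def log(message):
    # Hot path: capture the timestamp now, return immediately.
    log_queue.put((time.time(), message))


t = threading.Thread(target=writer, daemon=True)
t.start()

before = time.time()
for i in range(100):
    log(f"event {i}")
elapsed = time.time() - before   # far less than 100 x 10 ms of I/O

log_queue.put(None)              # flush and stop the writer
t.join()
```

Because `log()` only enqueues, the 100 calls return almost instantly even though the writer spends about a second on simulated I/O, and each persisted entry still carries the timestamp from the moment it was requested.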
The same principle applies when we use more complex technologies like the Oracle Service Bus and Oracle Integration Cloud (OIC). Each offers dedicated log activities that write to *-diagnostic.log files or the OIC activity stream.
More than once, I have come across customer requests proposing dedicated services that do something more complex than just writing to log files or activity streams. A customer might, for example, expect structured log entries in a specific format to be published to a queue or persisted to a big-data store. The proposal is that such dedicated services/APIs would then be called by each integration flow or process at different points, such as entry, exit, and error catch blocks.
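To make the proposal concrete, here is a hedged sketch of such a custom logging API: each flow builds a structured entry and publishes it to a queue for later persistence. The function name, field names, and the in-memory queue are all hypothetical stand-ins (a real implementation would publish to a message broker or big-data store).

```python
# Hypothetical sketch of the proposed custom "logging API" pattern.
# All names and fields are illustrative assumptions.
import json
import queue
import time

log_topic = queue.Queue()  # stands in for a real message broker


def publish_log_event(flow_name, phase, payload=None, error=None):
    """Called by each integration flow at entry, exit, and error points."""
    entry = {
        "timestamp": time.time(),
        "flow": flow_name,
        "phase": phase,        # e.g. "entry", "exit", "error"
        "payload": payload,
        "error": error,
    }
    log_topic.put(json.dumps(entry))


# Every flow must now call the API at every instrumentation point --
# the design-time overhead discussed below.
publish_log_event("OrderFlow", "entry", payload={"orderId": 42})
publish_log_event("OrderFlow", "exit")
```

Note that each integration flow now takes a runtime dependency on this API, which is precisely where the drawbacks arise.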
However, this has two clear drawbacks:
1) It adds design-time and development-time overhead, because the new custom ‘logging API’ must be called by every integration flow (we then need to worry about its availability and error handling in addition to the actual services we care about). Read the complete article here.
For regular information on Oracle PaaS, become a member of the PaaS (Integration & Process) Partner Community – please register here.