Which design (using OOP) is better? Externalized or internalized logging?

I need to verify log information (error messages) along with the result set. In my case, logging can also be understood as report generation.
Externalized Logging
Should I store the log messages (for any errors) along with the result and do the logging after the business-logic step?
Advantages:
This gives me log information that I can use to verify negative cases during unit testing, instead of parsing the log file.
It separates the logging from the business logic.
I can implement logging as a separate feature, and log in different formats based on the implementation (HTML, JSON, etc.).
Disadvantages:
This introduces code duplication, as I end up with the same loops for logging as for computing the result set.
During the logging phase the parent has to fetch the child's info, and storing all this info makes the solution complex and unreadable.
Internalized Logging
Should I do the logging at the same time as I perform the business logic?
Advantages:
No information needs to be stored, which simplifies the solution; the context of the parent objects is effectively passed down to the child object.
Logging happens at the moment the exception occurs.
Disadvantages:
Logging/reporting cannot be separated from the business logic.
I will not get log information to verify negative cases in unit tests, so I will need to parse the log file to verify them.
More context below:
I am building this tool in Python to compare properties in two resources, which can be of various types: JSON, properties files, VMs, REST APIs, etc.
The tool reads a metadata JSON file with a structure like the following:
{
  "run-name": "Run Tests",
  "tests": [
    {
      "name": "Test 1",
      "checks": [
        {
          "name": "Dynamic Multiple",
          "type": "COMPARE",
          "dynamic": [
            {
              "file": "source.json",
              "type": "JSON",
              "property": "topology.wlsClusters.[].clusterName"
            }
          ],
          "source": {
            "file": "source.json",
            "type": "JSON",
            "property": "topology.wlsClusters.[clusterName == ${1}].Xms"
          },
          "target": {
            "file": "target.properties",
            "type": "PROPERTY",
            "property": "fusion.FADomain.${1}.default.minmaxmemory.main",
            "format": "-Xms{}?"
          }
        }
      ]
    }
  ]
}
The above JSON tells my tool to:
Fetch 'clusterName' from each wlsCluster object in topology.wlsClusters. This gives a list of clusterNames.
From 'source.json', fetch the Xms value from each wlsCluster object whose 'clusterName' belongs to the above list.
Similarly, fetch all Xms values from the target.properties file using the above list.
Compare each value from the source Xms list to the target Xms list.
If all match, the result is SUCCESS; otherwise, FAILURE.
Intuitively, the above JSON can be mapped to corresponding objects:
Test
Check
Resource
Now, ideally, I know I should be doing the following steps:
Run all tests, and all checks in each test.
For each check, if the type is COMPARE:
Read and compute the 'dynamic' values.
Read 'source', replace the dynamic values in the property field, and fetch the corresponding properties.
Similarly, read 'target' and fetch the corresponding properties.
Compare and return 'PASSED' or 'FAILED'.
So broadly I have these steps:
FETCH and STORE VALUES.
COMPARE VALUES
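To make these steps concrete, here is a minimal sketch of the flow I have in mind (fetch_values is a placeholder for the actual resource readers, and the 'format' handling is omitted):

    def run_check(check):
        # 1. Resolve the dynamic values, e.g. the list of clusterNames.
        dynamic_values = fetch_values(check["dynamic"][0])

        # 2. Fetch and store the source and target values for each dynamic value.
        source = [fetch_values(check["source"], param) for param in dynamic_values]
        target = [fetch_values(check["target"], param) for param in dynamic_values]

        # 3. Compare the two result sets.
        return "PASSED" if source == target else "FAILED"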
I also want to print logs in the following format:
[<TIMESTAMP> <RUN-NAME> <TEST-NAME> <CHECK-NAME> <ERROR-LEVEL> <MESSAGE-TYPE> <RESOURCE-NAME>] custom-msg
where
ERROR-LEVEL: INFO, DEBUG, etc.
MESSAGE-TYPE: COMPARE, SYNTAX-ERROR, MISSING-PROPERTY, etc.
Now, if I follow the above object model and each object is responsible for handling its own logging, it would not have all this information. So I need to either:
pass this information down to the child objects,
or have the parent read the information of the child objects.
I prefer the second approach, as I can then store the results of the fetch and delay the logging (if any) until after the comparison. This way I can also run validations (unit tests), as I can verify the error message (negative scenario) as well.
But this is where my solution is getting complicated.
I need to store the result of the fetch in each object, which can be the value found or None when no value is found. When no value is found, I also need to store the error type and error message. Let's call this class Value.
Each Property can produce a list of such Value objects.
Each Resource can produce a list of such Property objects.
Each Check can produce a list of such Resource objects.
NOTE: This is developed in Python. (If it matters to you.)
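For illustration, a minimal sketch of what these container classes might look like (just a sketch; the names follow the description above and the exact fields are a guess):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Value:
        value: Optional[str]                # the fetched value, or None when not found
        error_type: Optional[str] = None    # e.g. "MISSING-PROPERTY"
        error_message: Optional[str] = None

    @dataclass
    class Property:
        name: str
        values: List[Value] = field(default_factory=list)

    @dataclass
    class Resource:
        name: str
        properties: List[Property] = field(default_factory=list)

    @dataclass
    class Check:
        name: str
        resources: List[Resource] = field(default_factory=list)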

Each class should be responsible for its own state. When you let classes make decisions based on properties of other classes, you will eventually end up with spaghetti code.
Code like if (test.check.resource.AProperty == aValue) is a clear indication that your spaghetti has started cooking.
In this case you don't want to log in the classes at all. You want to decide whether a sequence of actions completed successfully or not, and as a consequence of that, you want to log the result.
With that in mind, don't let the classes log at all; have them only report what they tested/checked and the result of that.
A common approach is to supply a context object which is used to receive the results.
Here is some C# code to illustrate (I don't know Python well enough):
interface VerifierContext
{
    void AddSuccess(string checkName, string resourceName, string message);
    void AddFailure(string checkName, string resourceName, SomeEnum failureType, string message);
}

public class SomeChecker
{
    public void Validate(VerifierContext context)
    {
        context.AddFailure("CompanyNameLength", "cluster.Company", SomeEnum.LengthRestriction, "Company name was 30chars, can only be 10");
    }
}
That will give you a flat list of validations. If you want nesting, you can add Enter/Exit methods:
public class SomeChecker
{
    public void Validate(VerifierContext context)
    {
        context.Enter("CompanyValidations");

        foreach (var validator in _childValidators)
            validator.Validate(context);

        context.Exit("CompanyValidations");
    }
}
You can of course design it in a lot of different ways. My main point is that each class in your check/parser should just decide whether everything went OK or not. It should not decide how things should be logged.
The class that triggers the work can then go through all the results and choose the log level depending on the error type, etc.
All classes are also easily tested since they only depend on the context.
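Since the question is about Python, a rough Python equivalent of the context idea might look like this (a sketch only; the names mirror the C# above):

    class VerifierContext:
        def __init__(self):
            self.results = []

        def add_success(self, check_name, resource_name, message):
            self.results.append(("SUCCESS", check_name, resource_name, None, message))

        def add_failure(self, check_name, resource_name, failure_type, message):
            self.results.append(("FAILURE", check_name, resource_name, failure_type, message))

    class SomeChecker:
        def validate(self, context):
            context.add_failure("CompanyNameLength", "cluster.Company",
                                "LENGTH_RESTRICTION",
                                "Company name was 30 chars, can only be 10")

In a unit test you can then assert directly on context.results instead of parsing a log file, and the class that triggered the work can walk the same list to decide what to log and at which level.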

Raku Cro service subscribing to data "in the background" - general guidance

I am attempting to put together a Cro service that has a react/whenever block consuming data "in the background". So unlike many examples of websocket usage with Cro, this has nothing to do with routes that may be accessed via the browser.
My use case is to consume messages received via an MQTT topic and do some processing with them. At a later stage in development I might create a supply out of this data, but for now, when data is received it will be stored in a variable and, depending on certain conditions, sent to another service via an HTTP POST.
My thought was to include a provider() in the Cro::HTTP::Server setup like so:
use Cro::HTTP::Log::File;
use Cro::HTTP::Server;
use Routes;
use DataProvider; # Here

my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    host => ...,
    port => ...,
    application => [routes(), provider()], # Made this into an array of subs?
    after => [
        Cro::HTTP::Log::File.new(logs => $*OUT, errors => $*ERR)
    ]
);
And in DataProvider.pm6:

use MQTT::Client;

sub provider() is export {
    my $mqtt = MQTT::Client.new: server => 'localhost';
    react {
        whenever $mqtt.subscribe('some/mqtt/topic') {
            say "+ topic: { .<topic> } => { .<message>.decode("utf8-c8") }";
        }
    }
}
This throws a bunch of errors:
A react block:
  in sub provider at DataProvider.pm6 (DataProvider) line 5
  in block <unit> at service.p6 line 26
Died because of the exception:
    Invocant of method 'write' must be an object instance of type
    'IO::Socket::Async', not a type object of type 'IO::Socket::Async'. Did
    you forget a '.new'?
      in method subscribe at /home/cam/raku/share/perl6/site/sources/42C762836A951A1C11586214B78AD34262EC465F (MQTT::Client) line 133
      in sub provider at DataProvider.pm6 (DataProvider) line 6
      in block <unit> at service.p6 line 26
To be perfectly honest, I am totally guessing that this is how I would approach the need to subscribe to data in the background of a Cro service, but I was not able to find any information on what might be considered the recommended approach.
Initially I had my react/whenever block in the main service.pm6 file, but that did not seem right. It also needed to be wrapped in a start {} block because, as I have just learned, react is blocking :) and Cro was not able to actually start.
Following the pattern of how Routes are implemented seemed logical, but I am missing something. The error speaks about setting up a new method, but I'm not convinced that is the root cause; Routes.pm6 does not have a constructor.
Can anyone point me in the right direction please?
Thanks to all who have provided information; this has been a very valuable learning exercise.
The approach of passing additional subroutines alongside routes() in the application parameter to Cro::HTTP::Server.new gave further trouble (an array is not allowed there, and it broke routing).
Instead, I have moved the background work into a class of its own, and given it start and stop methods more akin to Cro::HTTP::Server.
My new approach:
service.pm6
use Cro::HTTP::Log::File;
use Cro::HTTP::Server;
use Routes;
use KlineDataSubscriber; # Moved mqtt functionality here
use Database;

my $dsn = "host=localhost port=5432 dbname=act user=.. password=..";
my $dbh = Database.new :$dsn;

my $mqtt-host = 'localhost';
my $subscriber = KlineDataSubscriber.new :$mqtt-host;
$subscriber.start; # Inspired by $http.start below

my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    host => ...,
    port => ...,
    application => routes($dbh), # Basically back the way it was originally
    after => [
        Cro::HTTP::Log::File.new(logs => $*OUT, errors => $*ERR)
    ]
);
$http.start;

say "Listening at...";

react {
    whenever signal(SIGINT) {
        say "Shutting down...";
        $subscriber.stop;
        $http.stop;
        done;
    }
}
And in KlineDataSubscriber.pm6:

use MQTT::Client;

class KlineDataSubscriber {
    has Str $.mqtt-host is required;
    has MQTT::Client $.mqtt = Nil;

    submethod TWEAK() {
        $!mqtt = MQTT::Client.new: server => $!mqtt-host;
        await $!mqtt.connect;
    }

    method start(Str $topic = 'act/feed/exchange/binance/kline-closed/+/json') {
        start {
            react {
                whenever $!mqtt.subscribe($topic) {
                    say "+ topic: { .<topic> } => { .<message>.decode("utf8-c8") }";
                }
            }
        }
    }

    method stop() {
        # TODO Figure out how to unsubscribe and clean up nicely
    }
}
This feels much more "Cro idiomatic" to me, but I would be happy to be corrected.
More importantly, it works as expected and I feel it is somewhat future-proof. I should be able to create a supply to make real-time data available to the router, and push data to any connected web clients.
I also intend to add an HTTP GET endpoint /status with various checks to ensure everything is healthy.
The root cause
The error speaks about setting up a new method, but I'm not convinced that is the root cause.
It's not about setting up a new method. It's about a value that should be defined instead being undefined. That typically means a failure to attempt to initialize it, which typically means a failure to call .new.
Can anyone point me in the right direction please?
Hopefully this question helps.
Finding information on a recommended approach
I am totally guessing that this is how I would approach the need to subscribe to data in the background of a Cro service, but I was not able to find any information on what might be considered the recommended approach.
It might be helpful for you to list which of the get-up-to-speed steps you've followed from Getting started with Cro, including the basics but also the "Learn about" steps at the end.
The error message
A react block:
  in sub provider ...
Died because of the exception:
    ...
    in method subscribe ...
The error message begins with the built in react construct reporting that it caught an exception (and handled it by throwing its own exception in response). A "backtrace" corresponding to where the react appeared in your code is provided indented from the initial "A react block:".
The error message continues with the react construct summarizing its own exception (Died because ...) and explains itself by reporting the original exception, further indented, in subsequent lines. This includes another backtrace, this time one corresponding to the original exception, which will likely have occurred on a different thread with a different callstack.
(All of Raku's structured multithreading constructs[1] use this two part error reporting approach for exceptions they catch and handle by throwing another exception.)
The first backtrace indicates the react line:
in sub provider at DataProvider.pm6 (DataProvider) line 5
use MQTT::Client;
sub provider() is export {
    my $mqtt = MQTT::Client.new: server => 'localhost';
    react {
The second backtrace is about the original exception:
    Invocant of method 'write' must be an object instance of type
    'IO::Socket::Async', not a type object of type 'IO::Socket::Async'. ...
      in method subscribe at ... (MQTT::Client) line 133
This reports that the write method called on line 133 of MQTT::Client requires that its invocant be an instance of type 'IO::Socket::Async'. The value it got was of that type, but it was not an instance; it was instead a "type object". (All values of non-native types are either type objects or instances of their type.)
The error message concludes with:
Did you forget a '.new'?
This is a succinct hint based on the reality that 99 times out of a hundred the reason a type object is encountered when an instance is required is that code has failed to initialize a variable. (One of the things type objects are used for is to serve the role of "undefined" in languages like Perl.)
So, can you see why something that should have been an initialized instance of 'IO::Socket::Async' is instead an uninitialized one?
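A loose Python analogy (type objects are a Raku concept, but the flavor of the mistake is similar): using the class itself where an initialized instance was required.

    class Socket:
        def write(self, data):
            return len(data)

    sock = Socket        # forgot to call Socket() / .new: this binds the class itself
    sock.write(b"hi")    # TypeError: write() missing 1 required positional argument: 'data'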
Footnotes
[1] Raku's constructs for parallelism, concurrency, and asynchrony follow the structured programming paradigm. See Parallelism, Concurrency, and Asynchrony in Raku for Jonathan Worthington's video presentation of this overall approach. Structured constructs like react can cleanly observe, contain, and manage events that occur anywhere within their execution scope, including errors such as error exceptions, even if they happen on other threads.
You seem to be fine now, but when I first saw this I made https://github.com/jonathanstowe/Cro-MQTT, which turns the MQTT client into a first-class Cro service.
I haven't released it yet but it may be instructive.

Spring Data - WebFlux - Chaining requests

I use the reactive Mongo drivers and WebFlux dependencies.
I have code like below.
public Mono<Employee> editEmployee(EmployeeEditRequest employeeEditRequest) {
    return employeeRepository.findById(employeeEditRequest.getId())
        .map(employee -> {
            BeanUtils.copyProperties(employeeEditRequest, employee);
            return employeeRepository.save(employee);
        });
}
The Employee repository has the following method:
Mono<Employee> findById(String employeeId)
Does the thread actually block when findById is called? I understand that the portion within map actually blocks the thread.
If it blocks, how can I make this code completely reactive?
Also, in this reactive paradigm of writing code, how do I handle the case where the given employee is not found?
Yes, map is a blocking, synchronous operation whose execution time is always going to be deterministic.
map should be used when you want to transform an object/data in fixed time, i.e. for operations that are done synchronously, e.g. your BeanUtils.copyProperties operation.
flatMap should be used for non-blocking operations, or in short anything which returns a Mono or Flux.
"How do I handle the case where the given employee is not found?"
findById returns an empty Mono when nothing is found, so we can use switchIfEmpty here.
Now let's come to what changes you can make to your code:
public Mono<Employee> editEmployee(EmployeeEditRequest employeeEditRequest) {
    return employeeRepository.findById(employeeEditRequest.getId())
        .switchIfEmpty(Mono.defer(() -> {
            // do something
        }))
        .map(employee -> {
            BeanUtils.copyProperties(employeeEditRequest, employee);
            return employee;
        })
        .flatMap(employee -> employeeRepository.save(employee));
}
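For example (an illustrative choice, not part of the original answer), the // do something branch could return Mono.error(new RuntimeException("Employee not found")), or whatever fallback publisher fits your API; switchIfEmpty simply substitutes an alternative Mono when the source completes empty.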

Changing the GemFire query ResultSender batch size

I am experiencing a performance issue related to the default batch size of the query ResultSender in a client/server config. I believe the default value is 100.
If I run a simple query to get keys (with some order-by columns due to the PARTITION region type), this default batch size causes too many chunks to be sent back, even for 1,000 records. In my tests, even though the total query time is less than 100 ms, the app takes more than 10 seconds to process those chunks.
Reading between the lines in your problem statement, it seems you are:
Executing an OQL query on a PARTITION Region (PR).
Running the query inside a Function as recommended when executing queries on a PR.
Sending batch results (as opposed to streaming the results).
I also assume, since you posted exclusively in the #spring-data-gemfire channel, that you are using Spring Data GemFire (SDG) to:
Execute the query (e.g. by using the SDG GemfireTemplate; of course, I suppose you could also be using the GemFire Query API inside your Function directly, too)?
Implement the server-side Function using SDG's Function annotation support?
And possibly (indirectly) use SDG's BatchingResultSender, as described in the documentation?
NOTE: The default batch size in SDG is 0, NOT 100. Zero means stream the results individually.
Regarding #2 & #3, your implementation might look something like the following:
@Component
class MyApplicationFunctions {

    @GemfireFunction(id = "MyFunction", batchSize = "1000")
    public List<SomeApplicationType> myFunction(FunctionContext functionContext) {

        RegionFunctionContext regionFunctionContext =
            (RegionFunctionContext) functionContext;

        Region<?, ?> region = regionFunctionContext.getDataSet();

        if (PartitionRegionHelper.isPartitionRegion(region)) {
            region = PartitionRegionHelper.getLocalDataForContext(regionFunctionContext);
        }

        GemfireTemplate template = new GemfireTemplate(region);

        String OQL = "...";

        SelectResults<?> results = template.query(OQL); // or `template.find(OQL, args);`

        List<SomeApplicationType> list = ...;

        // process results, convert to SomeApplicationType, add to list

        return list;
    }
}
NOTE: Since you are most likely executing this Function "on Region", the FunctionContext type will actually be a RegionFunctionContext in this case.
The batchSize attribute on the SDG @GemfireFunction annotation (used for Function "implementations") allows you to control the batch size.
Instead of using SDG's GemfireTemplate to execute queries, you can, of course, use the GemFire Query API directly, as mentioned above.
If you need even more fine-grained control over "result sending", then you can simply "inject" the ResultSender provided by GemFire into the Function, even if the Function is implemented with SDG, as shown above. For example, you can do:
@Component
class MyApplicationFunctions {

    @GemfireFunction(id = "MyFunction")
    public void myFunction(FunctionContext functionContext, ResultSender resultSender) {

        ...

        SelectResults<?> results = ...;

        // now process the results and use the `resultSender` directly
    }
}
This allows you to "send" the results however you see fit, as required by your application.
You can batch/chunk results, stream, whatever.
Although, you should be mindful of the "receiving" side in this case!
The one thing that might not be apparent to the average GemFire user is that GemFire's default ResultCollector implementation collects "all" the results before returning them to the application. This means the receiving side does not support streaming or batching/chunking of the results, i.e. it cannot process them immediately as the server sends them (whether streamed, batched/chunked, or otherwise).
Once again, SDG helps you out here since you can provide a custom ResultCollector on the Function "execution" (client-side), for example:
@OnRegion("SomePartitionRegion", resultCollector="myResultCollector")
interface MyApplicationFunctionExecution {
    void myFunction();
}
In your Spring configuration, you would then have:
@Configuration
class ApplicationGemFireConfiguration {

    @Bean
    ResultCollector myResultCollector() {
        return ...;
    }
}
Your "custom" ResultCollector could return results as a stream, a batch/chunk at a time, etc.
In fact, I have prototyped a "streaming" ResultCollector implementation that will eventually be added to SDG, here.
Anyway, this should give you some ideas on how to handle the performance problem you seem to be experiencing. 1,000 results is not a lot of data, so I suspect your problem is mostly self-inflicted.
Hope this helps!
John,
Just to clarify: I use a client/server topology (actually WAN, but that is not important here). My client is a Spring Boot web app with a Kendo grid as the UI. Users can filter/sort on any combination of the columns, which is passed to the Spring Boot app to generate dynamic OQL and create the pagination. So far, except for being dynamic, my OQL queries have been quite straightforward. I do not want to introduce server-side Functions due to the complexity of our global deployment process, but I can if you think that is something I have to do.
Again, thanks for your answers.

Activiti BPMN - How to pass the username of whoever completed a task in variables/expressions?

I am very new to Activiti BPMN. I am creating a flow diagram in Activiti, and I am looking for a way to pass the username of whoever completed a task into the shell task arguments, so that I can fetch and save in the DB the user who completed that task.
Any Help would be highly appreciated.
Thanks in advance...
Here's something I prepared for Java developers, based on (I think) a blog post I saw.
Edit: https://community.alfresco.com/thread/224336-result-variable-in-javadelegate
RESULT VARIABLE
Option (1) – use expression language (EL) in the XML
<serviceTask id="serviceTask"
             activiti:expression="#{myService.toUpperCase(myVar)}"
             activiti:resultVariable="myVar" />
Java
public class MyService {
    public String toUpperCase(String val) {
        return val.toUpperCase();
    }
}
The returned String is assigned to activiti:resultVariable.
HACKING THE DATA MODEL DIRECTLY
Option (2) – use the execution environment
Java
public class MyService implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        String myVar = (String) execution.getVariable("myVar");
        execution.setVariable("myVar", myVar.toUpperCase());
    }
}
By contrast, here we are being passed an 'execution', and we are pulling values out of it, twiddling them, and putting them back.
This is somewhat analogous to a servlet taking the values passed in the HTTP request and then, based on them, doing different things in the response. (A stronger analogy would be a servlet Filter.)
So in your particular instance (depending on how you are invoking the shell script), using the Expression Language (EL) might be simplest and easiest.
Of course the value you want to pass has to be one that the process knows about (otherwise how can it pass a value it doesn't have a variable for?)
Hope that helps. :D
Usually in BPM engines you have a way to hook a listener into these kinds of events. In Activiti, if you are embedding it inside your service, you can add an extra event listener and then record the taskCompleted events, which will contain the currently logged-in user.
https://www.activiti.org/userguide/#eventDispatcher
Hope this helps.
I have used activiti:taskListener from the Activiti app. You need to configure the properties below:
1. I changed the properties in the task listener.
2. I used a JavaScript variable to hold the task.assignee value.

Lumen - seeder in Unit tests

I'm trying to implement unit tests in my company's project, and I'm running into some weird trouble trying to use a separate set of data in my database.
As I want tests to be performed in a confined environment, I'm looking for the easiest way to load data into a dedicated database. Long story short, to this end, I decided to use a MySQL dump of inserted data.
This is basically my seeder code:
public function run()
{
    \Illuminate\Support\Facades\DB::unprepared(file_get_contents(__DIR__ . '/data1.sql'));
}
Now here's the problem.
In my unit test, I can call the seeder, but:
If I call the seeder in setUpBeforeClass(), it works. However, it doesn't fit my needs, as I want to be able to invoke different sets of data for different tests.
If I call the seeder within a test, the data is never inserted in the database (either with or without the transaction trait).
If I use DB::insert instead of ::raw, ::unprepared, or ::statement, without using a raw SQL file, it works. But my inserts are too complicated for that.
Here are a few things I tried, with the same results:

DB::raw(file_get_contents(__DIR__ . '/database/data1.sql'));

DB::statement(file_get_contents(__DIR__ . '/database/data1.sql'));

$seeder = new CheckTestSeeder();
$seeder->run();

\Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);

$this->seeInDatabase('jackpot.progressive', [
    'name_progressive' => 'aaa'
]);
Any pointers on how to proceed, and on why I get different behaviors between setUpBeforeClass() and within a test, would be appreciated!
You may use the Illuminate\Foundation\Testing\RefreshDatabase trait, as explained here. If you need something more, you can override the refreshTestDatabase method of the RefreshDatabase trait.
protected function refreshTestDatabase()
{
    parent::refreshTestDatabase();

    \Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);
}