I've been using Redux recently on a ReactJS project, and I've come across an interesting situation where I need to generate some type of error message that is shown to the user. These could be validation errors or messages that report on the state of the system. There may be no prescribed way to do this, but what I'm curious about is whether to put the logic that determines the error message into my reducers (or referenced by my reducers), or whether to place it in my ReactJS components.
Transform error codes into messages
const getIrsFormSubmissionErrorMessage = (errorCode) => {
  switch (errorCode) {
    case "server_error": return "Your submission failed because of a problem on the server";
    case "authorization_failure": return "You are not allowed to submit this form.";
    case "validation_error": return "One or more of the values entered are invalid.";
    case undefined: return undefined;
    default: return "Your submission failed, please contact support.";
  }
}
Ex. 1 Error messages derived in reducer and shown in component:
function irsForm(state, action) {
  switch (action.type) {
    case "SUBMIT_IRS_FORM_FAILED":
      return {
        ...state,
        submitRequest: {
          executing: false,
          errorMessage: getIrsFormSubmissionErrorMessage(action.errorCode)
        },
      };
    default:
      return state;
  }
}
const FormError = (props) =>
  <span className="form-error">{props.errorMessage}</span>
Ex. 2 Error messages derived in component:
function irsForm(state, action) {
  switch (action.type) {
    case "SUBMIT_IRS_FORM_FAILED":
      return {
        ...state,
        submitRequest: {
          executing: false,
          errorMessage: action.errorCode,
        },
      };
    default:
      return state;
  }
}
const FormError = (props) =>
  <span className="form-error">{getIrsFormSubmissionErrorMessage(props.errorCode)}</span>
Other options?
It's also possible that I'm missing some other pattern or way of going about this that is different from the two examples above. If that's the case and there is a compelling reason to do it that way, then that would be great too.
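One further option, sketched below, is to keep only the error code in the store (as in Ex. 2) but derive the message in a selector function that sits between the store and the component, so neither the reducer nor the presentational component owns the mapping. This is only a sketch: getSubmitErrorMessage is a made-up name, and the mapping function is an abridged copy of the one above.

```javascript
// Abridged copy of the code-to-message mapping defined above.
const getIrsFormSubmissionErrorMessage = (errorCode) => {
  switch (errorCode) {
    case "server_error": return "Your submission failed because of a problem on the server";
    case undefined: return undefined;
    default: return "Your submission failed, please contact support.";
  }
};

// Selector: derives the display message from the stored error code on demand,
// so the reducer stays code-only and the component stays presentation-only.
const getSubmitErrorMessage = (state) =>
  getIrsFormSubmissionErrorMessage(state.irsForm.submitRequest.errorCode);
```

With react-redux this would slot into mapStateToProps, e.g. (state) => ({ errorMessage: getSubmitErrorMessage(state) }), and unit tests can target the selector directly without rendering any component.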
Arguments in favor of putting message content closer to the view:
Handling hyperlinks and other styling-related aspects of text within error messages may make the view a better place for these transformations, or at least for the content that will ultimately be displayed.
Unit tests around the reducer will likely be easier to comprehend if they compare expected error codes rather than the phrasing of messages.
Localization may require configuration and other kinds of values that don't fit the pure-function model.
Automated test cases won't need to be cluttered with localization-specific details.
Arguments in favor of putting message content closer to the reducers:
If you are unit testing reducers because of their functional simplicity, you get more coverage of this part of your application logic without testing your views.
Organizing logic into reducers and making views more anaemic may make it easier for developers to predict where to find each type of code.
Any custom localization logic may be easier to debug or test if it is kept within reducers.
These arguments remind me of a few related things from both DDD (Domain-Driven Design) and CQRS (Command Query Responsibility Segregation). In DDD there is sometimes debate on whether, and to what degree, view-only data should be present on objects within the domain model. If a property/field/data isn't used to identify or to support the domain logic itself, then should it be along for the ride? With CQRS, even though reads and writes have already been agreed to be separated, I think there are varying degrees to which you can denormalize the read models which are created in response to domain events. Examples can range from a read model that exists in a relational database to a set of statically generated web pages.
I'm sure I've left out several other important aspects, but this is what I could come up with off the top of my head.
I am attempting to put together a Cro service that has a react/whenever block consuming data "in the background", so unlike many examples of websocket usage with Cro, this has nothing to do with routes that may be accessed via the browser.
My use case is to consume messages received via an MQTT topic and do some processing on them. At a later stage in development I might create a supply out of this data, but for now, when data is received it will be stored in a variable and, dependent on certain conditions, be sent to another service via an HTTP POST.
My thought was to include a provider() in the Cro::HTTP::Server setup like so:
use Cro::HTTP::Log::File;
use Cro::HTTP::Server;
use Routes;
use DataProvider; # Here

my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    host => ...,
    port => ...,
    application => [routes(), provider()], # Made this into an array of subs?
    after => [
        Cro::HTTP::Log::File.new(logs => $*OUT, errors => $*ERR)
    ]
);
And in the DataProvider.pm6:
use MQTT::Client;

sub provider() is export {
    my $mqtt = MQTT::Client.new: server => 'localhost';
    react {
        whenever $mqtt.subscribe('some/mqtt/topic') {
            say "+ topic: { .<topic> } => { .<message>.decode("utf8-c8") }";
        }
    }
}
This throws a bunch of errors:
A react block:
  in sub provider at DataProvider.pm6 (DataProvider) line 5
  in block <unit> at service.p6 line 26
Died because of the exception:
    Invocant of method 'write' must be an object instance of type
    'IO::Socket::Async', not a type object of type 'IO::Socket::Async'. Did
    you forget a '.new'?
      in method subscribe at /home/cam/raku/share/perl6/site/sources/42C762836A951A1C11586214B78AD34262EC465F (MQTT::Client) line 133
      in sub provider at DataProvider.pm6 (DataProvider) line 6
      in block <unit> at service.p6 line 26
To be perfectly honest, I am totally guessing that this is how I would approach the need to subscribe to data in the background of a Cro service, but I was not able to find any information on what might be considered the recommended approach.
Initially I had my react/whenever block in the main service.pm6 file, but that did not seem right, and it needed to be wrapped in a start {} block because, as I have just learned, react is blocking :) and Cro was not able to actually start.
Following the pattern of how Routes are implemented seemed logical, but I am missing something. The error speaks about setting up a new method, but I'm not convinced that is the root cause. Routes.pm6 does not have a constructor.
Can anyone point me in the right direction please?
Thanks to all who have provided information, this has been a very valuable learning exercise.
The approach of passing additional subroutines alongside routes() in the application parameter to Cro::HTTP::Server.new gave further trouble (an array is not allowed, and it broke routing).
Instead, I have moved the background work into a class of its own, and given it a start and stop method more akin to Cro::HTTP::Server.
My new approach:
service.pm6
use Cro::HTTP::Log::File;
use Cro::HTTP::Server;
use Routes;
use KlineDataSubscriber; # Moved mqtt functionality here
use Database;

my $dsn = "host=localhost port=5432 dbname=act user=.. password=..";
my $dbh = Database.new :$dsn;

my $mqtt-host = 'localhost';
my $subscriber = KlineDataSubscriber.new :$mqtt-host;
$subscriber.start; # Inspired by $http.start below

my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    host => ...,
    port => ...,
    application => routes($dbh), # Basically back the way it was originally
    after => [
        Cro::HTTP::Log::File.new(logs => $*OUT, errors => $*ERR)
    ]
);
$http.start;
say "Listening at...";
react {
    whenever signal(SIGINT) {
        say "Shutting down...";
        $subscriber.stop;
        $http.stop;
        done;
    }
}
And in KlineDataSubscriber.pm6
use MQTT::Client;

class KlineDataSubscriber {
    has Str $.mqtt-host is required;
    has MQTT::Client $.mqtt = Nil;

    submethod TWEAK() {
        $!mqtt = MQTT::Client.new: server => $!mqtt-host;
        await $!mqtt.connect;
    }

    method start(Str $topic = 'act/feed/exchange/binance/kline-closed/+/json') {
        start {
            react {
                whenever $!mqtt.subscribe($topic) {
                    say "+ topic: { .<topic> } => { .<message>.decode("utf8-c8") }";
                }
            }
        }
    }

    method stop() {
        # TODO Figure out how to unsubscribe and clean up nicely
    }
}
This feels much more "Cro idiomatic" to me, but I would be happy to be corrected.
More importantly, it works as expected and I feel it is somewhat future-proof. I should be able to create a supply to make real-time data available to the router, and push data to any connected web clients.
I also intend to add an HTTP GET endpoint /status with various checks to ensure everything is healthy.
The root cause
The error speaks about setting up a new method, but I'm not convinced that is the root cause.
It's not about setting up a new method. It's about a value that should be defined instead being undefined. That typically means a failure to attempt to initialize it, which typically means a failure to call .new.
Can anyone point me in the right direction please?
Hopefully this question helps.
Finding information on a recommended approach
I am totally guessing that this is how I would approach the need to subscribe to data in the background of a Cro service, but I was not able to find any information on what might be considered the recommended approach.
It might be helpful for you to list which of the get-up-to-speed steps you've followed from Getting started with Cro, including the basics but also the "Learn about" steps at the end.
The error message
A react block:
in sub provider ...
Died because of the exception:
...
in method subscribe ...
The error message begins with the built-in react construct reporting that it caught an exception (and handled it by throwing its own exception in response). A "backtrace" corresponding to where the react appeared in your code is provided, indented from the initial "A react block:".
The error message continues with the react construct summarizing its own exception (Died because ...) and explains itself by reporting the original exception, further indented, in subsequent lines. This includes another backtrace, this time one corresponding to the original exception, which will likely have occurred on a different thread with a different callstack.
(All of Raku's structured multithreading constructs[1] use this two part error reporting approach for exceptions they catch and handle by throwing another exception.)
The first backtrace indicates the react line:
in sub provider at DataProvider.pm6 (DataProvider) line 5
use MQTT::Client;

sub provider() is export {
    my $mqtt = MQTT::Client.new: server => 'localhost';
    react {
The second backtrace is about the original exception:
Invocant of method 'write' must be an object instance of type
'IO::Socket::Async', not a type object of type 'IO::Socket::Async'. ...
in method subscribe at ... (MQTT::Client) line 133
This reports that the write method called on line 133 of MQTT::Client requires its invocant to be an instance of type 'IO::Socket::Async'. The value it got was of that type, but it was not an instance; it was instead a "type object". (All values of non-native types are either type objects or instances of their type.)
The error message concludes with:
Did you forget a '.new'?
This is a succinct hint based on the reality that 99 times out of a hundred the reason a type object is encountered when an instance is required is that code has failed to initialize a variable. (One of the things type objects are used for is to serve the role of "undefined" in languages like Perl.)
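A rough analogy in Python (not Raku; a Python class is not a Raku type object, but forgetting the constructor call fails in a similar way):

```python
class Socket:
    def write(self, data):
        return len(data)

sock = Socket()           # initialized: an instance, so write() works
print(sock.write(b"hi"))  # 2

sock = Socket             # forgot the '()' (Raku's '.new'): this is the class
                          # itself, playing the role of Raku's undefined type object
try:
    sock.write(b"hi")     # no instance invocant, so the call blows up
except TypeError as err:
    print("failed:", err)
```

In both languages the fix is the same: make sure the variable was actually initialized to an instance before methods are called on it.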
So, can you see why something that should have been an initialized instance of 'IO::Socket::Async' is instead an uninitialized one?
Footnotes
[1] Raku's constructs for parallelism, concurrency, and asynchrony follow the structured programming paradigm. See Parallelism, Concurrency, and Asynchrony in Raku for Jonathan Worthington's video presentation of this overall approach. Structured constructs like react can cleanly observe, contain, and manage events that occur anywhere within their execution scope, including errors such as error exceptions, even if they happen on other threads.
You seem to be fine now, but when I first saw this I made https://github.com/jonathanstowe/Cro-MQTT, which turns the MQTT client into a first-class Cro service.
I haven't released it yet but it may be instructive.
I use the reactive MongoDB driver and WebFlux dependencies.
I have code like the below:
public Mono<Employee> editEmployee(EmployeeEditRequest employeeEditRequest) {
    return employeeRepository.findById(employeeEditRequest.getId())
        .map(employee -> {
            BeanUtils.copyProperties(employeeEditRequest, employee);
            return employeeRepository.save(employee);
        });
}
The Employee repository has the following method:
Mono<Employee> findById(String employeeId);
Does the thread actually block when findById is called? I understand the portion within map actually blocks the thread.
If it blocks, how can I make this code completely reactive?
Also, in this reactive paradigm of writing code, how do I handle the case where the given employee is not found?
Yes, map is a blocking and synchronous operation, for which the time taken is always going to be deterministic.
map should be used when you want to do a transformation of an object/data in fixed time, i.e. operations that are done synchronously, e.g. your BeanUtils.copyProperties operation.
flatMap should be used for non-blocking operations, or in short, anything that returns a Mono or Flux.
"How do I handle the case where the given employee is not found?" -
findById returns an empty Mono when nothing is found, so we can use switchIfEmpty here.
Now let's come to what changes you can make to your code:
public Mono<Employee> editEmployee(EmployeeEditRequest employeeEditRequest) {
    return employeeRepository.findById(employeeEditRequest.getId())
        .switchIfEmpty(Mono.defer(() -> {
            //do something
        }))
        .map(employee -> {
            BeanUtils.copyProperties(employeeEditRequest, employee);
            return employee;
        })
        .flatMap(employee -> employeeRepository.save(employee));
}
I need to verify log information (error messages) along with the result set. Here, logging can also be understood as report generation in my case.
Externalized Logging
Should I store the log messages (for any errors) along with the result, and do the logging after the business-logic step?
Advantages:
This gives me the log information that I can use to verify negative cases during unit testing, versus parsing the log file.
Separate out the logging from business logic.
I can implement logging as a separate feature, where I can log in different formats based on implementation (HTML, JSON, etc)
Disadvantages
This will have code duplication, as I end up with the same loops for logging as for the computation of the result set.
During the logging phase the parent will have to fetch the child's info, and storing all this info makes it complex and unreadable.
Internalized Logging
Should I do the logging at the same time as I perform the business logic?
Advantages
Not storing any information simplifies the solution, effectively passing the context of the parent objects to the child object.
Logging happens at the moment an exception occurs.
Disadvantages
But I am not able to separate logging/reporting from business logic.
I will not get log information to verify negative cases in unit tests, so I will need to parse the log file to verify.
More context below:
I am building this tool for comparison of properties in two resources that can be of type JSON, properties, VMs, REST APIs, etc., in Python.
The tool reads a metadata JSON with a structure like the following:
{
  "run-name": "Run Tests",
  "tests": [
    {
      "name": "Test 1",
      "checks": [
        {
          "name": "Dynamic Multiple",
          "type": "COMPARE",
          "dynamic": [
            {
              "file": "source.json",
              "type": "JSON",
              "property": "topology.wlsClusters.[].clusterName"
            }
          ],
          "source": {
            "file": "source.json",
            "type": "JSON",
            "property": "topology.wlsClusters.[clusterName == ${1}].Xms"
          },
          "target": {
            "file": "target.properties",
            "type": "PROPERTY",
            "property": "fusion.FADomain.${1}.default.minmaxmemory.main",
            "format": "-Xms{}?"
          }
        }
      ]
    }
  ]
}
The above JSON tells my tool to:
Fetch 'clusterName' from each wlsCluster object in topology.wlsClusters. This gives a list of 'clusterNames'.
From 'source.json', fetch the Xms value from each wlsCluster object whose 'clusterName' belongs to the above list.
Similarly, fetch all Xms values from the target.properties file using the above list.
Compare each value from the source Xms list to the target Xms list.
If all match, then SUCCESS, else FAILURE.
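To make the first step concrete, here is a minimal sketch of resolving a path like topology.wlsClusters.[].clusterName against parsed JSON. The function name and the simplified path grammar are my own; it handles only plain keys and the '[]' fan-out, not the '[clusterName == ${1}]' filter form shown in the metadata:

```python
def resolve_path(data, path):
    """Resolve a dotted path; '[]' fans out over every element of a list."""
    values = [data]
    for segment in path.split('.'):
        if segment == '[]':
            values = [item for value in values for item in value]
        else:
            values = [value[segment] for value in values]
    return values

source = {"topology": {"wlsClusters": [
    {"clusterName": "cluster1", "Xms": "512m"},
    {"clusterName": "cluster2", "Xms": "1024m"},
]}}

print(resolve_path(source, "topology.wlsClusters.[].clusterName"))
# ['cluster1', 'cluster2']
```

The filtered and ${1}-substituted forms would build on the same traversal, selecting list elements by predicate instead of fanning out over all of them.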
Intuitively, the above JSON can be mapped to its corresponding objects:
Test
Check
Resource
Now ideally I know I should be doing following steps:
Run all tests, and all checks in each test.
For each check, if its type is COMPARE:
Read and compute the 'dynamic' values.
Read 'source', replace the dynamic values in the property field, and fetch the corresponding properties.
Similarly, read 'target' and fetch the corresponding properties.
Compare and return 'PASSED' or 'FAILED'.
So broadly I have these steps:
FETCH and STORE VALUES.
COMPARE VALUES
I also want to print logs in the following format:
[<TIMESTAMP> <RUN-NAME> <TEST-NAME> <CHECK-NAME> <ERROR-LEVEL> <MESSAGE-TYPE> <RESOURCE-NAME>] custom-msg
where
ERROR-LEVEL: INFO, DEBUG, etc.
MESSAGE-TYPE: COMPARE, SYNTAX-ERROR, MISSING-PROPERTY, etc.
Now, if I follow the above object model and each object is responsible for handling its own logging, it would not have all this information. So I need to either:
pass this information down to the child objects,
or have the parent read the information of the child object.
I prefer the second approach, as then I can store the results of the fetch and delay the logging (if any) until after the comparison. This way I can also run validations (unit tests), since I can verify the error message (negative scenario) as well.
But this is where my solution is getting complicated.
I need to store the result of the fetch in each object, which can be the value found, or 'None' when no value is found. When no value is found I also need to store the error type and error message. Let's call this class Value.
Each Property can produce a list of such Values.
Each Resource can produce a list of such Properties.
Each Check can produce a list of such Resources.
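That nesting could be sketched with dataclasses along these lines (my own naming, taken directly from the description above, not from the actual tool):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Value:
    value: Optional[str] = None        # the fetched value, or None when not found
    error_type: Optional[str] = None   # e.g. "MISSING-PROPERTY", set when value is None
    error_message: Optional[str] = None

@dataclass
class Property:
    name: str
    values: List[Value] = field(default_factory=list)

@dataclass
class Resource:
    name: str
    properties: List[Property] = field(default_factory=list)

@dataclass
class Check:
    name: str
    resources: List[Resource] = field(default_factory=list)

# A failed fetch carries its error details with it instead of logging directly:
missing = Value(error_type="MISSING-PROPERTY",
                error_message="No Xms value found for cluster1")
check = Check(name="Dynamic Multiple",
              resources=[Resource(name="source.json",
                                  properties=[Property(name="Xms", values=[missing])])])
```

The comparison step can then walk this tree, and the logger (or a unit test) can inspect the stored error_type/error_message afterwards.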
NOTE: This is developed in Python. (If it matters to you.)
Each class should be responsible for its own state. When you let classes make decisions based on properties in other classes, you will eventually end up with spaghetti code.
Code like if (test.check.resource.AProperty == aValue) is a clear indication that your spaghetti has started cooking.
In this case you don't want to log in the classes at all. You want to decide whether a sequence of actions completed successfully or not, and as a consequence of that, log the result.
With that in mind, don't let the classes log at all; have them only report what they tested/checked and the result of that.
A common approach is to supply a context object which is used to receive the results.
Here is some C# code to illustrate (I don't know Python well enough):
interface VerifierContext
{
    void AddSuccess(string checkName, string resourceName, string message);
    void AddFailure(string checkName, string resourceName, SomeEnum failureType, string message);
}

public class SomeChecker
{
    public void Validate(VerifierContext context)
    {
        context.AddFailure("CompanyNameLength", "cluster.Company", SomeEnum.LengthRestriction, "Company name was 30 chars, can only be 10");
    }
}
That will give you a flat list of validations. If you want to get nested you can add Enter/Exit methods:
public class SomeChecker
{
    public void Validate(VerifierContext context)
    {
        context.Enter("CompanyValidations");

        foreach (var validator in _childValidators)
            validator.Validate(context);

        context.Exit("CompanyValidations");
    }
}
You can of course design it in a lot of different ways. My main point is that each class in your checker/parser should just decide whether everything went OK or not. It should not decide how things should be logged.
The class that triggers the work can then go through all results and choose a log level depending on the error type, etc.
All classes are also easily tested since they only depend on the context.
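Since the question is about Python, here is a rough translation of the same context idea (a sketch only; the names mirror the C# example above rather than any real library):

```python
class VerifierContext:
    """Receives results from checkers; the caller decides how to log them."""
    def __init__(self):
        self.results = []

    def add_success(self, check_name, resource_name, message):
        self.results.append(("success", check_name, resource_name, message))

    def add_failure(self, check_name, resource_name, failure_type, message):
        self.results.append(("failure", check_name, resource_name, failure_type, message))


class SomeChecker:
    """Reports what it checked and the outcome; never logs directly."""
    def validate(self, context):
        context.add_failure("CompanyNameLength", "cluster.Company",
                            "LengthRestriction",
                            "Company name was 30 chars, can only be 10")


context = VerifierContext()
SomeChecker().validate(context)
# The caller can now turn context.results into log lines, a report,
# or unit-test assertions on the negative cases.
```

A unit test for a negative scenario then asserts on context.results directly, with no log-file parsing involved.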
I am moving some data from the Vuex store into its own module. Most of it works great, but I'm running into an issue that I can't seem to fix.
I have a plugin that I add, and that plugin also needs access to the store.
So at the top of that plugin, we import the store:
import store from '../store/store';
Great; further down that plugin, I'm accessing the store's data in a method that I expose on the service:
hasPermission(permission) {
    return (store.state.authorization.permissions.indexOf(permission) >= 0);
}
Please note that authorization is now a separate module, and is no longer part of the root state object.
Now the funny thing is that the above will return an error telling me indexOf is not a function.
When I add the following, however:
hasPermission(permission) {
    console.log('Validating permission ' + permission);
    console.log(store.state);
    return (store.state.authorization.permissions.indexOf(permission) >= 0);
}
I notice that (1) the output to console is what I expect it to be, and (2), I'm not getting the error, and my menu structure dynamically builds as expected...
so I'm a bit confused to say the least...
authorization.permissions is updated each time a user authenticates, logs out, or chooses another account to work on; in these cases we fetch updated permissions from the server and commit them to the store, so we can build our menu structure based on up-to-date permissions. This works pretty well, but to be honest I'm not sure I understand why the first version fails.
The plugin is created as follows in the install:
Vue.prototype.$security = new Vue(
    ...
    methods: {
        hasPermission: function(permission) {
            ...
        }
    }
    ...
);
Given that my app will download files from a server, and I only want one download to be in progress at a time, how could this be done with RxAlamofire? I might simply be missing an Rx operator.
Here's the rough code:
Observable
    .from(paths)
    .flatMapWithIndex({ (ip, idx) -> Observable<(Int, Video)> in
        let v = self.files![ip.row] as! Video
        return Observable.from([(idx, v)])
    })
    .flatMap { (item) -> Observable<Video> in
        let req = URLRequest(url: item.1.downloadURL())
        return Api.alamofireManager()
            .rx
            .download(req, to: { (url, response) -> (destinationURL: URL, options: DownloadRequest.DownloadOptions) in
                ...
            })
            .flatMap({ $0.rx.progress() })
            .flatMap { (progress) -> Observable<Float> in
                // Update a progress bar
                ...
            }
            // Only propagate finished items
            .filter { $0 >= 1.0 }
            // Return the item itself
            .flatMap { _ in Observable.from([item.1]) }
    }
    .subscribe(
        onNext: { (res) in
            ...
        },
        onError: { (error) in
            ...
        },
        onCompleted: {
            ...
        }
    )
My problem is that (a) RxAlamofire will download multiple items at the same time, and (b) the (progress) block is called multiple times for those various items (with different progress info each time, causing the UI to behave a bit weirdly).
How to ensure the downloads are done one by one instead of simultaneously?
Does alamofireManager().rx.download() download concurrently or serially?
I'm not sure which it does, so test that first. Isolate this code and see if it executes multiple downloads at once. If it does, then read up on the documentation for serial downloads instead of concurrent downloads.
If it downloads one at a time, then it means it has something to do with your Rx code that triggers the progress bar update issue. If it doesn't download one at a time, then it means we just need to read up on Alamofire's documentation on how to download one at a time.
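If it turns out the downloads run concurrently, the underlying idea of the fix is to start each download only after the previous one completes. Sketched here with plain JavaScript Promises purely to illustrate serialization (this is not RxAlamofire or RxSwift code; in Rx terms you would reach for a concatenating operator rather than a merging one):

```javascript
// Run async tasks strictly one at a time by chaining them; a merging
// strategy would instead start them all immediately.
function runSerially(tasks) {
  const results = [];
  return tasks.reduce(
    (chain, task) => chain.then(task).then((r) => { results.push(r); }),
    Promise.resolve()
  ).then(() => results);
}

// Usage: each "download" starts only after the previous one resolves.
runSerially([
  () => Promise.resolve("file1"),
  () => Promise.resolve("file2"),
]).then((files) => console.log(files)); // logs the two names in order
```

The same serialization also tames the progress reporting: only one item emits progress at a time, so the progress bar never receives interleaved values from different downloads.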
Complex transformations and side effects
Something to consider is that your data streams are becoming more complex and difficult to debug because so many things are happening in one stream. Because of the multiple flatMaps, there can be many more emissions affecting the progress-bar update. It is also possible that the numerous flatMap operations that acquired an Observable are the cause of the multiple updates to the progress bar.
Complex data streams
In one data stream you (a) performed the network call, (b) updated the progress bar, (c) filtered finished videos, and (d) went back to the video you wanted, by using flatMapWithIndex at the start to pair the id and the video model so that you could return to the model at the end. Kind of complicated... My guess is that the weird progress bar updates might be caused by creating a hot observable on each call of $0.rx.progress().
I made a github gist of my Rx Playground that tries to model what you're trying to do.
In functional reactive programming, it would be much more readable and easier to debug if you first define your data streams/observables. In my gist, I began with the observables and how I planned to model the download progress.
This code will avoid the concurrency issues if the RxAlamofire query downloads 1 at a time, and it properly presents the progress value for a UIProgressBar.
Side note
Do you need to track the individual progress downloads per download item? Or do you want your progress bar to just increment per finished download item?
Also, be wary of the possible dangers of misusing a chain of multiple flatMaps, as explained here.