Redux-observable returning some sample data - redux-observable

I am using redux-observable and I need to return some sample data as an Observable. I have the following code in one of my epics
const sampleFilteredStores = Observable.of([{ type: 'FILTERED_STORES', store: 'a' }, { type: 'FILTERED_STORES', store: 'b' }]);
const filteredStores$ = action$.ofType('SEARCH_STORES').mapTo(Observable.of(sampleFilteredStores).mergeAll());
return filteredStores$;
However when I run this I get an error
instrument.js:56 Uncaught Error: Actions must be plain objects. Use custom middleware for async actions.(…)
What am I doing wrong here and how do I fix this?

Investigation
In the case of the example code you gave, the first thing we need to do is indent and format the code so that it's easier to understand what is happening.
const somethingEpic = action$ => {
  const sampleFilteredStores = Observable.of([
    { type: 'FILTERED_STORES', store: 'a' },
    { type: 'FILTERED_STORES', store: 'b' }
  ]);
  const filteredStores$ = action$.ofType('SEARCH_STORES')
    .mapTo(
      Observable.of(sampleFilteredStores)
        .mergeAll()
    );
  return filteredStores$;
};
Exactly how you format your code is your choice, but I personally find something like this much more readable. It will help you debug but also significantly help any future maintainers of your code understand your intent.
mapTo
Now I can see one problem right away: you're passing an Observable to mapTo, which is highly unusual in Rx. It's not wrong 100% of the time, but it is in the vast majority of cases, and even in the rare exceptions there would be much clearer ways to express the desired intent.
Observable.of
Digging in further, I see two usages of Observable.of.
The first one that stuck out is that you pass an Observable to Observable.of: Observable.of(sampleFilteredStores). The same advice as with mapTo applies here; this is very uncommon and not recommended because it creates higher-order Observables unnecessarily. I do see you use mergeAll() to flatten it, but that just gives you an Observable that is basically identical to sampleFilteredStores, with extra indirection.
When I dig even deeper I notice another subtle but critical thing: you pass an array of actions to Observable.of. This is also highly suspicious, because it means you create an Observable that emits a single array of two actions, not one that emits those two actions sequentially. If the latter is what you intended, you needed to pass the objects directly as arguments themselves: Observable.of(action1, action2, action3, ...etc). You may have been confused by seeing someone pass an array to Observable.from, but that's different from Observable.of.
Root Cause
Combining those discoveries, I can now see that this epic actually emits an Observable rather than actions, which is why you're receiving the error from redux. That Observable itself would emit an array of actions, so even if you had flattened it out you would still receive the same error.
Solution
It appears the provided code is likely contrived, either to simplify your question or because you were learning Rx or redux-observable. But in this specific case I believe you wanted to listen for SEARCH_STORES and, when received, sequentially dispatch two actions, both of type FILTERED_STORES but with differing store values.
Using idiomatic Rx, that could look something like this:
const somethingEpic = action$ => {
  return action$.ofType('SEARCH_STORES')
    .mergeMap(() => Observable.of(
      { type: 'FILTERED_STORES', store: 'a' },
      { type: 'FILTERED_STORES', store: 'b' }
    ));
};
Here we're using mergeMap, but since the Observable.of we flatten in emits synchronously, we could have used switchMap or concatMap too; they would have had the same net effect. That's not the case for Observables that emit asynchronously, though, so definitely study up on the various flattening strategy operators.
This chain can be described as: whenever we receive an object whose type property equals SEARCH_STORES, map it to an Observable of two objects (the FILTERED_STORES actions) that are emitted sequentially and synchronously.
Closing
Hopefully this helps! One thing to keep in mind when learning and using redux-observable is that it is almost entirely "just RxJS" that happens to be dealing with objects which are "actions". So normal, idiomatic Rx is normal, idiomatic redux-observable, and the same goes for the problems you might encounter. The only real difference is that redux-observable provides the single ofType operator as shorthand for a filter (as the docs describe). If you have Rx issues in the future, you might find it helpful to refactor your examples to use filter and phrase them agnostic of redux-observable, since the Rx community is obviously much larger!

Related

Array of objects

Let's say I want to connect to two package repositories, make a query for a package name, combine the result from the repos and process it (filter, unique, prioritize,...), What is a good way to do that?
What I thought about is creating an Array of two Cro::HTTP::Client objects (with a base-uri specific to each repo), and when I need to make an HTTP request I call @a>>.get, then process the results from the repos together.
I have attached a snippet of what I'm trying to do. But I would like to see if there is a better way to do that, or whether the approach mentioned in the following link is suitable for this use case! https://perl6advent.wordpress.com/2013/12/08/day-08-array-based-objects/
use Cro::HTTP::Client;
class Repo {
    has $.name;
    has Cro::HTTP::Client $!client;
    has Cro::Uri $.uri;
    has Bool $.disable = False;
    submethod TWEAK () {
        $!client = Cro::HTTP::Client.new(base-uri => $!uri, :json);
    }
    method get (:$package) {
        my $path = <x86_64?>;
        my $resp = await $!client.get($path ~ $package);
        my $json = await $resp.body;
        return $json;
    }
}
class AllRepos {
    has Repo @.repo;
    method get (:$package) {
        # check if some repos are disabled
        my @candidate = @!repo>>.get(:$package).unique(:with(&[eqv])).flat;
        # do further processing of the data, then return it
        return @candidate;
    }
}
my $repo1 = Repo.new: name => 'repo1', uri => Cro::Uri.new(:uri<http://localhost:80>);
my $repo2 = Repo.new: name => 'repo2', uri => Cro::Uri.new(:uri<http://localhost:77>);
my @repo = $repo1, $repo2;
my $repos = AllRepos.new: :@repo;
#my @packages = $repos.get: package => 'rakudo';
Let's say I want to connect to two package repositories, make a query for a package name, combine the result from the repos and process it (filter, unique, prioritize,...), What is a good way to do that?
The code you showed looks like one good way in principle but not, currently, in practice.
The hyperoperators such as >>:
Distribute an operation (in your case, connect and make a query) ...
... to the leaves of one or two input composite data structures (in your case the elements of one array @!repo) ...
... with logically parallel semantics (by using a hyperoperator you are declaring that you are taking responsibility for thinking that the parallel invocations of the operation will not interfere with each other, which sounds reasonable for connecting and querying) ...
... and then return a resulting composite data structure with the same shape as the original structure if the hyperoperator is a unary operator (which applies in your case, because you applied >>, a unary operator which takes a single argument on its left, so the result of the >>.get is just a new array, just like the input @!repo), or whose shape is the hyper'd combination of the shapes of the pair of structures if the hyperoperator is a binary operator, such as >>op<< ...
... which can then be further processed (in your case it is, with .unique, which will produce a resulting Seq) ...
... whose elements you then assign back into another array (@candidate).
So your choice is a decent fit in principle, but the commitment to parallelism is only semantic and right now the Rakudo compiler never takes advantage of it, so it will actually run your code sequentially, which presumably isn't a good fit in practice.
Instead I suggest you consider:
Using map to distribute an operation over multiple elements (in a shallow manner; map doesn't recursively descend into a deep structure like the hyperoperators, deepmap etc., but that's OK for your use case) ...
... in combination with the hyper method (or race, if result order doesn't matter), which parallelizes the method call that follows it.
So you might write:
my @candidate =
    @!repo.hyper.map(*.get: :$package).unique(:with(&[eqv])).flat;
Alternatively, check out task 94 in Using Perl 6.
whether the approach mentioned in the following link is suitable for this use case! https://perl6advent.wordpress.com/2013/12/08/day-08-array-based-objects/
I don't think so. That's about constructing a general purpose container that's like an array but with some differences to the built in Array that are worth baking into a new type.
I can just about imagine such things that are vaguely related to your use case, e.g. an array type that automatically hyper-distributes method calls invoked on it, if they're defined on Any or Mu (rather than Array or List), i.e. one that does what I described above but with the code @!repo.get ... instead of @!repo.hyper.map: *.get .... But would it be worth it (assuming it would work; I haven't thought about it beyond inventing the idea for this answer)? I doubt it.
More generally...
It seems like what you are looking for is cookbook-like material. Perhaps a question posted at the reddit sub /r/perl6 is in order?

populate object from command line and check object state

I populate an object based on the user's input from the command line.
The object needs to have a certain amount of data to proceed. My solution so far is nested if-statements to check if the object is ready. Like below example.
Maybe 3 if-statements aren't so bad(?), but what if the number of if-statements starts to increase? What are my alternatives here? Let's say that X, Y and Z are three completely different things. For example, object.X is a list of integers, object.Y is a string, and maybe Z is some sort of boolean that is true only if object.Y has a certain number of values.
I'm not sure polymorphism will work in this case?
do
{
    if (object.HasX)
    {
        if (object.HasY)
        {
            if (object.HasZ)
            {
                //Object is ready to proceed.
            }
            else
            {
                //Object is missing Z. Handle it...
            }
        }
        else
        {
            //Object is missing Y. Handle it...
        }
    }
    else
    {
        //Object is missing X. Handle it...
    }
} while (!String.IsNullOrEmpty(line));
For complex logic workflows, I have found, it's important for maintainability to decide which level of abstraction the logic should live in.
Will new logic/parsing rules have to be added regularly?
Unfortunately, there isn't a way to avoid explicit conditionals; they have to live somewhere.
Some things that can help keep it clean could be:
The main function is only responsible for converting command line arguments to native datatypes, then it pushes the logic down to an object builder class. This keeps the main function stable and unchanged (except for adding flag descriptions), and keeps the logic out of the domain, centralized in the builder abstraction.
The main function is responsible for parsing and configuring the domain. This isolates all the messy conditionals in the main/parsing function and keeps the logic outside of the domain models.
Flatten the logic: if not object.HasX, return; by the next step you know HasX holds. This will still be a list of conditionals, but a flatter one.
Create a declarative DSL rule language (more apparent when flattening). This could be a rule processor where the logic lives; the outer main function could then define the states that are necessary to proceed.
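The flattening idea can be sketched with guard clauses. Shown here in TypeScript for brevity (the shape is identical in C#), with made-up hasX/hasY/hasZ fields standing in for the question's HasX/HasY/HasZ checks:

```typescript
// Guard clauses flatten the nesting: each early return handles one missing
// piece, and every line after a guard can assume the earlier checks passed.
interface Obj {
    hasX: boolean;
    hasY: boolean;
    hasZ: boolean;
}

function readyToProceed(obj: Obj): string {
    if (!obj.hasX) return "missing X"; // handle it...
    if (!obj.hasY) return "missing Y"; // past here, X is known to hold
    if (!obj.hasZ) return "missing Z"; // past here, X and Y hold
    return "ready";                    // object is ready to proceed
}

console.log(readyToProceed({ hasX: true, hasY: true, hasZ: true }));  // "ready"
console.log(readyToProceed({ hasX: true, hasY: false, hasZ: true })); // "missing Y"
```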

Alternative to the try (?) operator suited to iterator mapping

In the process of learning Rust, I am getting acquainted with error propagation and the choice between unwrap and the ? operator. After writing some prototype code that only uses unwrap(), I would like to remove unwrap from reusable parts, where panicking on every error is inappropriate.
How would one avoid the use of unwrap in a closure, like in this example?
// todo is VecDeque<PathBuf>
let dir = fs::read_dir(&filename).unwrap();
todo.extend(dir.map(|dirent| dirent.unwrap().path()));
The first unwrap can be easily changed to ?, as long as the containing function returns Result<(), io::Error> or similar. However, the second unwrap, the one in dirent.unwrap().path(), cannot be changed to dirent?.path() because the closure must return a PathBuf, not a Result<PathBuf, io::Error>.
One option is to change extend to an explicit loop:
let dir = fs::read_dir(&filename)?;
for dirent in dir {
todo.push_back(dirent?.path());
}
But that feels wrong - the original extend was elegant and clearly reflected the intention of the code. (It might also have been more efficient than a sequence of push_backs.) How would an experienced Rust developer express error checking in such code?
How would one avoid the use of unwrap in a closure, like in this example?
Well, it really depends on what you wish to do upon failure.
should failure be reported to the user or be silent
if reported, should one failure be reported or all?
if a failure occurs, should it interrupt processing?
For example, you could perfectly decide to silently ignore all failures and just skip the entries that fail. In this case, the Iterator::filter_map combined with Result::ok is exactly what you are asking for.
let dir = fs::read_dir(&filename)?;
todo.extend(dir.filter_map(Result::ok).map(|dirent| dirent.path()));
The Iterator interface is full of goodies, it's definitely worth perusing when looking for tidier code.
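And if the answer to "should a failure interrupt processing?" is yes, a further idiom worth knowing (standard library behavior, not from the original answer) is collecting an iterator of Results into a single Result, which short-circuits on the first error:

```rust
// Collecting Iterator<Item = Result<T, E>> into Result<Vec<T>, E> stops at
// the first Err, which can then be propagated with `?`. For the read_dir
// example this would look roughly like:
//     let paths: Result<Vec<_>, io::Error> =
//         fs::read_dir(&filename)?.map(|res| res.map(|e| e.path())).collect();
fn main() {
    let all_ok: Vec<Result<i32, String>> = vec![Ok(1), Ok(2), Ok(3)];
    let collected: Result<Vec<i32>, String> = all_ok.into_iter().collect();
    assert_eq!(collected, Ok(vec![1, 2, 3]));

    let has_err: Vec<Result<i32, String>> = vec![Ok(1), Err("boom".into()), Ok(3)];
    let collected: Result<Vec<i32>, String> = has_err.into_iter().collect();
    assert_eq!(collected, Err("boom".to_string()));
}
```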
Here is a solution based on the filter_map suggested by Matthieu. It calls Result::map_err to ensure the error is "caught" and logged, then Result::ok and filter_map remove it from iteration:
fn log_error(e: io::Error) {
eprintln!("{}", e);
}
(|| -> io::Result<()> {
    let dir = fs::read_dir(&filename)?;
    todo.extend(dir
        .filter_map(|res| res.map_err(log_error).ok())
        .map(|dirent| dirent.path()));
    Ok(())
})().unwrap_or_else(log_error);

Reuse the description of an existing Error when creating a new Error

I have the following code in Rust, which does not compile, but shows the intent of what I'd like to do.
pub fn parse(cursor: &mut io::Cursor<&[u8]>) -> io::Result<Ack> {
    use self::byteorder::{BigEndian, ReadBytesExt};
    use self::core::error::Error;
    match cursor.read_u16::<BigEndian>() {
        Err(byteorder::Error::Io(error)) => Err(error),
        Err(error) =>
            Err(io::Error::new(io::ErrorKind::Other, error.description(),
                               None)),
        Ok(value) => Ok(Ack { block_number: value })
    }
}
Essentially, I want to take the error description of an error returned by the byteorder library and use it to create the description of an error I'll pass back to the user of my library. This fails with packets.rs:166:58: 166:63 error: `error` does not live long enough, and I understand why.
The byteorder library solves this issue by wrapping an std::io::Result in the byteorder::Error::Io constructor. However, I don't want to take this route because I'd have to define my own error type that wraps either an std::io::Error or a byteorder::Error. It seems to me that my users shouldn't know or care that I use the byteorder library, and it shouldn't be part of my interface.
I'm a Rust newbie and don't yet know the idioms and best practices of the language and design. What are my options for dealing with this?
Your problem is in fact that io::Error::new()'s second parameter is &'static str, while byteorder::Error::description() returns a &'a str where 'a is the lifetime of the error object itself, which is less than 'static. Hence you can't use it for io::Error's description.
The simplest fix would be moving the byteorder::Error description to the detail field of io::Error:
Err(error) =>
    Err(io::Error::new(
        io::ErrorKind::Other,
        "byteorder error",
        Some(error.description().to_string())
    )),
However, you should seriously consider making a custom wrapper error type which encapsulates all "downstream" errors. With properly written FromError instances you should be able to write something like
try!(cursor.read_u16::<BigEndian>()
    .map(|value| Ack { block_number: value }))
instead of your whole match. Custom error wrappers will also help you when your program grows and more "downstream" error sources appear - you could just add new enum variants and/or FromError implementations to support these new errors.
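For readers on current Rust: FromError has since become std::convert::From, and try! became the ? operator, so the wrapper-type approach looks roughly like this sketch (all names here are illustrative, not from the question's code):

```rust
use std::fmt;
use std::io;

// Illustrative wrapper error type; the names are made up for this sketch.
#[derive(Debug)]
pub enum ParseError {
    Io(io::Error),
    Other(String),
}

// With this impl, `?` (and `.into()`) converts an io::Error into a
// ParseError automatically, so callers never see the underlying library.
impl From<io::Error> for ParseError {
    fn from(e: io::Error) -> ParseError {
        ParseError::Io(e)
    }
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            ParseError::Io(e) => write!(f, "I/O error: {}", e),
            ParseError::Other(msg) => write!(f, "{}", msg),
        }
    }
}

// Stand-in for the parse function: reads a big-endian u16 block number.
fn read_block_number(data: &[u8]) -> Result<u16, ParseError> {
    if data.len() < 2 {
        // The From impl performs the io::Error -> ParseError conversion.
        return Err(io::Error::new(io::ErrorKind::UnexpectedEof, "truncated packet").into());
    }
    Ok(u16::from_be_bytes([data[0], data[1]]))
}

fn main() {
    assert_eq!(read_block_number(&[0x00, 0x07]).unwrap(), 7);
    assert!(matches!(read_block_number(&[]), Err(ParseError::Io(_))));
}
```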
I cannot test your code so I can't be sure. Isn't the ref keyword enough?
Err(byteorder::Error::Io(ref error)) => Err(error),

How to call xhr in Dojo using TypeScript

I'm using Typescript (new to it), and Dojo, and I need to make an async call. This is pretty easy when you're not using TypeScript. But, the compiler is making things harder, especially because I do not allow "implicit any". I also like the lambda-style callbacks, but the compiler complained about the "implicit any" there, too. Mostly, I'm getting confused between the Deferred, and the Promise, and how to import the promise module.
Is there anybody with Typescript/Dojo experience who can tell me if I got this right? And, is there any way to improve it?
import xhr = require("dojo/request/xhr");
import Deferred = require("dojo/Deferred");
import lang = require("dojo/_base/lang");

private getDataAsync(url: string, param: any): dojo.promise.Promise {
    var deferred = new Deferred();
    var options: any = {
        handleAs: 'json',
        query: {
            'param': param
        }
    };
    xhr.get(url, options).then(
        lang.hitch(this, function (data: any) {
            var returnValue = this.doSomething(data);
            deferred.resolve(returnValue);
        }),
        function (err: any) {
            deferred.reject(err, true);
        }
    );
    return deferred.promise;
}
Moreover, do I even need to use Dojo's xhr here? Is there something built into TypeScript that wraps XMLHTTPRequest in a browser-neutral way, the way that dojo does?
I think the problem is that the Deferred and Promise classes in the dojo.d.ts appear to be long overdue for an update. They do not have a generic type parameter for the result type, and the then callbacks are just Function, so they capture nothing about the shape of the function. This doesn't take advantage of TypeScript 1.0, never mind 1.4.
Compare with es6-promise.d.ts, in which Promise<R> has a method then<U>, where R is the value coming out of the promise and U is the value produced by then's resolve handler, and thus the next promise will be Promise<U>. So chains of operations on promises are strongly typed and everything works beautifully.
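To illustrate what those generics buy you, here is a sketch using standard ES6 promises and a made-up SearchResult shape (not dojo's actual typings):

```typescript
// SearchResult and getDataAsync are made-up stand-ins for this sketch.
interface SearchResult {
    items: string[];
}

function getDataAsync(url: string): Promise<SearchResult> {
    // Stand-in for an xhr.get(url, { handleAs: 'json' }) call.
    return Promise.resolve({ items: ["store-a", "store-b"] });
}

// then<U> tracks each transformation: Promise<SearchResult> -> Promise<number>,
// so the compiler rejects, e.g., treating the count as a string.
const count: Promise<number> =
    getDataAsync("/search").then((data: SearchResult) => data.items.length);

count.then((n: number) => console.log(n)); // logs 2
```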
If you make similar improvements to dojo.d.ts and send them as a pull request, they'll probably be quite grateful! :)
Is there something built into TypeScript that wraps XMLHTTPRequest in a browser-neutral way
No. TypeScript has only a very minimal runtime, there to help the compiler generate valid code (pretty much only the __extends function).
I also like the lambda-style callbacks, but the compiler complained about the "implicit any" there, too
This is natural. The compiler does not know the result of the XHR; if you know it, specify it using an interface, or you can tell the compiler that you don't want type safety and use any, as you are doing already.
Update 1
I'm still stuck on the differences between dojo.promise.Promise, deferred.promise, and Deferred
Promise is a promise : https://github.com/promises-aplus/promises-spec
Deferred is something that has a promise (.promise) as well as nice handles (.resolve and .reject) to determine the fate of said promise.