Say I have the following code:
Flux.fromIterable(List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11))
    .flatMap(integer -> {
        if (integer == 5) {
            throw new RuntimeException("error");
        }
        return just(Tuples.of(integer, new Random().nextInt()));
    })
    .onErrorContinue((throwable, o) -> just(Tuples.of(o, 0)))
    .log()
    .subscribe();
which outputs:
onSubscribe([Fuseable] FluxContextStart.ContextStartSubscriber)
request(unbounded)
onNext([1,-1752848133])
onNext([2,-1719473285])
onNext([3,819220275])
onNext([4,-725013418])
onNext([6,-1693809308])
onNext([7,1457499883])
onNext([8,-740589679])
onNext([9,1718349574])
onNext([10,-861794538])
onNext([11,1016444064])
onComplete()
Is there a way I can recover 5 with a default value instead of dropping it?
See onErrorReturn() and onErrorResume(). You probably need to use them inside flatMap(), on the inner Mono over the value, to avoid losing the rest of the original Flux values.
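For example, here is a minimal sketch of that idea: the per-element work is wrapped in Mono.fromCallable so the exception is raised inside the inner Mono (rather than thrown straight out of the flatMap lambda), and onErrorReturn supplies the default for just that element:

// assumes the same context as the question, plus reactor.core.publisher.Mono
Flux.fromIterable(List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11))
    .flatMap(integer -> Mono.fromCallable(() -> {
                // the failure now happens inside the inner Mono...
                if (integer == 5) {
                    throw new RuntimeException("error");
                }
                return Tuples.of(integer, new Random().nextInt());
            })
            // ...so it can be replaced per element; the outer Flux keeps going
            .onErrorReturn(Tuples.of(integer, 0)))
    .log()
    .subscribe();

With this, the log shows onNext([5,0]) instead of 5 being dropped.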
Suppose I have a list of 30 items and the given index is 9; I need to get the items from index 10 to 19.
Currently I'm doing it in a Java style:
val newsList: ArrayList<Model> = arrayListOf()
// Get all items starting next to the currently selected item
for (i in (position + 1) until originalList.size) {
    // Limit the number of items to avoid any possible OOM
    if (newsList.size < 10)
        newsList.add(originalList[i])
}
You can use drop and take for this kind of thing:
val items = List(30) { i -> "Item ${i + 1}" }
items.drop(10).take(10).run(::println)
>> [Item 11, Item 12, Item 13, Item 14, Item 15, Item 16, Item 17, Item 18, Item 19, Item 20]
Also, you don't need to worry about how many items are in the collection: if you did drop(69), you'd just end up with an empty list, and listOf(1, 2).take(3) would just give [1, 2]. They work like "drop/take at most"; you'll only get an error if you use a negative count.
I am having some trouble with the .zip() operator.
Let me simplify my problem with a small example.
Flux<Integer> flux1 = Flux.just(9, 8, 3, -2);
Flux<Integer> flux2 = Flux.just(7);
Flux<Integer> flux3 = Flux.just(6, 5, 4, -4);
List<Flux<Integer>> list1 = Arrays.asList(flux1, flux2, flux3);

TreeSet<Integer> set = new TreeSet<>(Comparator.reverseOrder());
Set<Integer> list = Flux.zip(list1, objects -> {
            boolean setChanged = false;
            for (Object o : objects) {
                Integer i = (Integer) o;
                if (set.size() < 5 || i > set.last()) {
                    setChanged = true;
                    set.add(i);
                    if (set.size() > 5) {
                        set.pollLast();
                    }
                }
            }
            return setChanged;
        })
        .takeWhile(val -> val)
        .then(Mono.just(set))
        .block();
System.out.println(list);
Here I have 3 different sources (they are sorted descending by default, and the number of sources could be much bigger), and I want to get from them a collection of 5 elements sorted descending. Unfortunately, I can't just use the concat() or merge() operators, because the sources can be really big in real life, and I need only a small number of elements.
I am expecting [9, 8, 7, 6, 5] here, but one of the sources completes after the first iteration of zipping, which ends the whole zip.
Could you please suggest how I can get around this problem?
You can try the reduce operation:
@Test
void test() {
    Flux<Integer> flux1 = Flux.just(9, 8, 3, -2);
    Flux<Integer> flux2 = Flux.just(7, 0, -2, 4, 3, 2, 2, 1);
    Flux<Integer> flux3 = Flux.just(6, 5, 4, -4);
    var k = 5;
    List<Flux<Integer>> publishers = List.of(flux1, flux2, flux3);
    var flux = Flux.merge(publishers)
            .log()
            .limitRate(2)
            .buffer(2)
            .reduce((l1, l2) -> {
                System.out.println(l1);
                System.out.println(l2);
                return Stream.concat(l1.stream(), l2.stream())
                        .sorted(Comparator.reverseOrder())
                        .limit(k)
                        .collect(Collectors.toList());
            })
            .log();

    StepVerifier.create(flux)
            .expectNext(List.of(9, 8, 7, 6, 5))
            .expectComplete()
            .verify();
}
You can fetch data in chunks and compare them to find the top k elements.
In the sequential case it fetches a new batch, compares it to the current top-k result, and returns a new top-k, like in the example above (a PriorityQueue may work better than sorting if k is big).
If you're using parallel schedulers and batches are fetched in parallel, they can be compared with each other independently, which should be a bit faster.
You also have full control over the fetched data via limitRate, buffer, delayElements, etc.
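If k is big, a heap-based merge can replace the sort-and-limit inside reduce. Below is a rough sketch of that variant; mergeTopK is a hypothetical helper, not a Reactor operator:

// plain Java helper; needs java.util.{ArrayList, Comparator, List, PriorityQueue}
// Merges one fetched batch into the current top-k result. The min-heap keeps
// at most k elements, so nothing bigger than k is ever sorted.
static List<Integer> mergeTopK(List<Integer> current, List<Integer> batch, int k) {
    PriorityQueue<Integer> heap = new PriorityQueue<>(current); // min-heap: smallest on top
    for (Integer i : batch) {
        if (heap.size() < k) {
            heap.offer(i);
        } else if (i > heap.peek()) {
            heap.poll();   // evict the smallest of the current top-k
            heap.offer(i);
        }
    }
    List<Integer> result = new ArrayList<>(heap);
    result.sort(Comparator.reverseOrder()); // descending, as in the question
    return result;
}

The reduce step above would then become .reduce((l1, l2) -> mergeTopK(l1, l2, k)), keeping at most k elements in the heap instead of sorting the whole concatenation on every step.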
I am in the middle of setting up Redux to manage the state for all my API data. I have an infinite FlatList that grows with each query, using the offset and limit I pass as API params.
The issue that remains: I am able to get the first set of data, but I never manage to combine the data from all the API calls. I am sure I am doing something silly here.
I am badly stuck and have been at this day and night. Any help will be greatly appreciated.
Reducer:
import { RECEIVED_NEWS } from './actions';

export const news = (state = [], action) => {
  // console.log('action data is ' + JSON.stringify(action));
  switch (action.type) {
    case RECEIVED_NEWS:
      return [...state, action.apidata];
    default:
      return state;
  }
};
Action:
export const RECEIVED_NEWS = 'RECEIVED_NEWS';

export const addNews = apidata => ({
  type: RECEIVED_NEWS,
  apidata
});
Sample API data: https://codebeautify.org/online-json-editor/cb73c978 or https://pastebin.com/rS8Aj4ex
Object dir that I print to the console: http://navgujaratsamay.co.in/wp-content/uploads/2019/02/Screenshot-2019-02-01-at-5.09.41-PM.png
I am expecting to merge the data from all the API calls; I am calling the store successfully, but every time I get only the last call's data.
You need to do:
return [...state, ...action.apidata]
because action.apidata is an array too, you need to spread it as well; otherwise it will get nested. If apidata were not an array but just an object, there would be no need to spread it.
For example:
> let arr1 = [1, 2, 3, 4, 5]
> let arr2 = [6, 7, 8, 9, 0]
> [...arr1, arr2] // wrong
< [1, 2, 3, 4, 5, [6, 7, 8, 9, 0]] // gives a nested array
> [...arr1, ...arr2] // correct
< [1, 2, 3, 4, 5, 6, 7, 8, 9, 0] // merges properly
> let num = 10
> [...arr1, num] // no spreading required when it's not an array
< [1, 2, 3, 4, 5, 10] // merges properly
I often end up with data sources like this (pseudocode below, not any specific syntax; it is just to illustrate):
list = {
"XLabel",
"XDescription",
"YLabel",
"YDescription",
"ZLabel",
"ZDescription"
}
The desired output is:
list = {
MyClass("XLabel", "XDescription"),
MyClass("YLabel", "YDescription"),
MyClass("ZLabel", "ZDescription")
}
Is there anything cleaner than doing a fold() and folding it into a new list? I've also rejected doing something weird like list.partition().zip().
I basically want a more powerful map that would work like mapChunks(it1, it2 -> MyClass(it1, it2)), where the chunking is part of the function so it gets easy and nice. (My example has the list in chunks of two, but three is also a prevalent use case.)
Does this function exist? Or what is the most idiomatic way to do this?
You can use the chunked function and then map over the result. The syntax gets very close to what you wanted if you destructure the lambda argument:
list.chunked(2)
.map { (it1, it2) -> MyClass(it1, it2) }
// Or use _it_ directly: .map { MyClass(it[0], it[1]) }
I think the windowed method should do what you want.
lst.windowed(size = 2, step = 2, partialWindows = false) { innerList -> MyClass(innerList[0], innerList[1]) }
You can also use chunked, but it calls windowed under the hood, and with chunked you can get lists that have fewer elements than you were expecting, because the last chunk may be smaller than the requested size.
EDIT: to answer @android developer's question about getting the indexes of the list:
val lst = listOf(7, 8, 9, 10, 11, 12, 13, 14, 15, 16)
val windowedList = lst.mapIndexed { index, it -> index to it }
    .windowed(size = 2, step = 2, partialWindows = false) {
        it[0].first
    }
println(windowedList)
Would output
[0, 2, 4, 6, 8]
To add to the existing answers: you can use the chunked function with a transform lambda passed as its second argument:
list.chunked(2) { (label, description) -> MyClass(label, description) }
This way is more efficient because the temporary list of two elements is reused across all chunks.
You can create an extension function, for example mapChunks, and reuse it:
fun List<String>.mapChunks(): List<MyClass> {
    return chunked(2).map { MyClass(it[0], it[1]) }
}

val list1 = listOf(
    "XLabel",
    "XDescription",
    "YLabel",
    "YDescription",
    "ZLabel",
    "ZDescription"
)
val result1 = list1.mapChunks()

val list2 = listOf(
    "XLabel1",
    "XDescription1",
    "YLabel1",
    "YDescription1",
    "ZLabel1",
    "ZDescription1"
)
val result2 = list2.mapChunks()
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/chunked.html
chunked returns sublists of the size you specify; this is the API call you want.
Considering your list is in pairs of two, you can do this:
list.chunked(2)                     // List<List<String>>
    .map { MyClass(it[0], it[1]) }  // List<MyClass>
I've just started practicing Groovy, and I have a question related to maps and the IDEA IDE.
Why does IDEA show me the notification below when I try to use an Integer as a key for a map? This simple Groovy script works fine and prints the correct result.
list = [4, 7, 3, 7, 7, 1, 4, 2, 4, 2, 7, 5]
map = [:]
list.each {
    t = map[(it)]
    map[(it)] = t != null ? t + 1 : 1
}
map.each { key, value -> if (value == 1) println key }
It is caused by IntelliJ IDEA seeing the map variable as Object: it seems that IDEA does not follow type inference when a static type or the keyword def is missing in front of the variable. If you take a look at DefaultGroovyMethods you will see that there is only one getAt method implemented for the Object type:
public static Object getAt(Object self, String property) {
    return InvokerHelper.getProperty(self, property);
}
This is why IDEA warns you about the missing method getAt(Object self, Integer property): it is not aware that map is actually a Map and not a plain Object.
Please follow the official Groovy guideline, which says:
Variables can be defined using either their type (like String) or by using the keyword def:
String x
def o
Source: http://docs.groovy-lang.org/latest/html/documentation/core-semantics.html#_variable_definition
If you define your variable as
def map = [:]
IntelliJ won't complain anymore.