Remove values with some similarities in a map using Kotlin streams

I have a Map<String, MyObject> where the values may share the same value for some properties (e.g. the name property in my example). I would appreciate any solution, ideally a stream-style one, that removes entries whose values have the same name and keeps only the one with the minimum id.
data class MyObject(val id: Int, val name: String)
For instance my map could be:
[
    "first" to MyObject(1, "Alice"),
    "second" to MyObject(2, "Bob"),
    "third" to MyObject(3, "Alice")
]
and the expected output is:
[
    "first" to MyObject(1, "Alice"),
    "second" to MyObject(2, "Bob")
]
where the entry with key third is removed because the value has the same name as the first entry.

First, we need to identify all of the duplicate candidates. We can do that with groupBy, which works on any Iterable; a Kotlin Map isn't itself Iterable, but its entries set is, with element type Map.Entry<K, V>.
myMap.entries
    .groupBy { entry -> entry.value.name }
This produces a value of type Map<String, List<Map.Entry<String, MyObject>>>.
Now, for each value in that map, we want to choose the element in the list with the smallest id. We can select the minimum element by a selector using minBy (minByOrNull on newer Kotlin versions) and apply it to every value of the map with mapValues.
myMap.entries
    .groupBy { entry -> entry.value.name }
    .mapValues { entry -> entry.value.minBy { it.value.id }!! }
(Note: groupBy always produces non-empty lists, since it partitions the input, so we can confidently assert with !! that a minimum exists.)
Finally, this returns a Map<String, Map.Entry<String, MyObject>>, and you probably want to eliminate the excess Map layer.
myMap.entries
    .groupBy { entry -> entry.value.name }
    .mapValues { entry -> entry.value.minBy { it.value.id }!! }
    .values
    .associate { it.key to it.value }
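For reference, here is a minimal runnable sketch of the whole pipeline applied to the sample map from the question; it uses minByOrNull, which on newer Kotlin versions plays the role of the nullable minBy used above:
data class MyObject(val id: Int, val name: String) // as defined in the question

fun main() {
    val myMap = mapOf(
        "first" to MyObject(1, "Alice"),
        "second" to MyObject(2, "Bob"),
        "third" to MyObject(3, "Alice"),
    )

    val result = myMap.entries
        .groupBy { it.value.name }                                       // duplicates share a name
        .mapValues { (_, group) -> group.minByOrNull { it.value.id }!! } // keep the smallest id per name
        .values
        .associate { it.key to it.value }                                // back to Map<String, MyObject>

    println(result) // {first=MyObject(id=1, name=Alice), second=MyObject(id=2, name=Bob)}
}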

There are multiple ways to do this in pure Kotlin; here is one that relies on the fact that a map cannot contain duplicate keys. I am sure there are better solutions out there:
values.toList()
    // If you care about keeping the smaller id,
    // sort descending so the smaller ids come last and replace the larger ones.
    .sortedByDescending { it.second.id }
    // associateBy drops duplicate keys, keeping the last occurrence
    .associateBy { it.second.name }
    // Back to the original Map<String, MyObject> structure
    .map { it.value.first to it.value.second }
    .toMap()
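A quick check of this approach as well, assuming the question's sample map is bound to a variable named values and MyObject is the data class defined in the question:
val values = mapOf(
    "first" to MyObject(1, "Alice"),
    "second" to MyObject(2, "Bob"),
    "third" to MyObject(3, "Alice"),
)

val deduped = values.toList()
    .sortedByDescending { it.second.id }
    .associateBy { it.second.name }
    .map { it.value.first to it.value.second }
    .toMap()

println(deduped) // {first=MyObject(id=1, name=Alice), second=MyObject(id=2, name=Bob)}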

Related

How to filter a list of tuples?

Hey guys, I'm kinda new to Nextflow. I would like to parse a channel whose items are tuples that look like this:
[ID, [[Type1, file, file], [Type2, file, file], (...)]]
I would like to filter it so that each item keeps only the Type1 tuples, to get:
[ID, [[Type1, file, file]]]
What would be the best approach? I tried .filter(), but it obviously returns the whole item as soon as it detects Type1, without removing the Type2 tuples.
You can use the map operator and the Groovy findAll() method with a closure to find the tuples where "Type1" is the first element. For example:
workflow {
    ...
    your_channel.map { id, the_list ->
        tuple( id, the_list.findAll { it.first() == "Type1" } )
    }
}

How to swap places of strings in a Kotlin list?

I have a list in my project like
val list1 = listOf("1", "pig", "3", "cow")
and I need to swap it to "pig", "1", "cow", "3".
The numbers and words will be random, so the solution can't depend on these exact values.
Can anyone tell me how to do this?
You can combine chunked() to get pairs of items and then flatMap() to swap them and recreate a flat list:
list1
    .chunked(2)
    .flatMap { listOf(it[1], it[0]) } // or: it.reversed()
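For reference, a quick check of this transformation against the example list from the question (the printed output is what I expect it to produce):
val list1 = listOf("1", "pig", "3", "cow")
val swapped = list1
    .chunked(2)
    .flatMap { listOf(it[1], it[0]) }

println(swapped) // [pig, 1, cow, 3]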
However, it looks pretty weird to have a list like this in the first place. If related items are stored at two consecutive indexes, that design complicates maintaining and processing the data. Instead, create a data class for both fields and build a list of such data items:
val list = listOf(Animal("1", "pig"), Animal("3", "cow"))

data class Animal(
    val id: String,
    val name: String,
)
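If the data really does arrive as the flat mixed list, here is a minimal sketch (my own addition, not part of the answer above) of converting it into the suggested Animal objects:
val list1 = listOf("1", "pig", "3", "cow")

// Pair up consecutive (id, name) elements and map each pair to an Animal.
val animals = list1
    .chunked(2)
    .map { (id, name) -> Animal(id = id, name = name) }

println(animals) // [Animal(id=1, name=pig), Animal(id=3, name=cow)]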

Using RxJava to generate a map where keys are values of a Kotlin enum and map's values come from another RxJava stream

Introduction
Let's say I have a Kotlin enum class:
enum class Type {
    ONE,
    TWO,
    THREE,
    FOUR
}
and the following data class:
data class Item(
    val name: String,
    val type: Type
)
Then I have a Single that emits a list of Items. It can be anything, but for example purposes let's say it looks like this:
val itemsSingle = Single.just(listOf(
    Item("A", Type.ONE),
    Item("B", Type.ONE),
    Item("C", Type.TWO),
    Item("D", Type.THREE),
))
Problem
What I'd like to achieve is to have an RxJava stream that will output a map where keys come from Type and values are lists of Items matching a given Type value (where an undetermined, unsorted list of Items is provided by some other Single stream). The signature would be:
Single<Map<Type, List<Item>>> // or Observable<Map<Type, List<Item>>>
One additional requirement is that the map's keys should always exhaust all values from Type enum even if the itemsSingle stream contains no items for some Type values (or no items at all). So, for the provided itemsSingle example stream the returned map should look like this:
{
    ONE: [ Item(name: "A", type: ONE), Item(name: "B", type: ONE) ],
    TWO: [ Item(name: "C", type: TWO) ],
    THREE: [ Item(name: "D", type: THREE) ],
    FOUR: []
}
Attempt
With all the above, I've more or less achieved the desired result with the following steps:
To satisfy the requirement of exhausting all Type enum values I first create a map that has an empty list for all possible Type values:
val typeValuesMap = Type.values().associate { it to emptyList<Item>() }
val typeValuesMapSingle = Single.just(typeValuesMap)
// result: {ONE=[], TWO=[], THREE=[], FOUR=[]}
I can get a map that contains items from itemsSingle grouped under respective Type value keys:
val groupedItemsMapSingle = itemsSingle.flattenAsObservable { it }
    .groupBy { it.type }
    .flatMapSingle { it.toList() }
    .toMap { list -> list[0].type } // the list is guaranteed to have at least one item
// result: {ONE=[Item(name=A, type=ONE), Item(name=B, type=ONE)], THREE=[Item(name=D, type=THREE)], TWO=[Item(name=C, type=TWO)]}
Finally, I can combine both entry streams using the combineLatest operator, overwriting the initial empty list for a given Type value whenever itemsSingle contained any Items of that type:
Observable.combineLatest(
    typeValuesMapSingle.flattenAsObservable { it.entries },
    groupedItemsMapSingle.flattenAsObservable { it.entries }
) { a, b -> listOf(a, b) }
    .defaultIfEmpty(typeValuesMap.entries.toList()) // in case itemsSingle is empty
    .flatMapIterable { it }
    .collect({ mutableMapOf<Type, List<Item>>() }, { a, b -> a[b.key] = b.value })
// result: {FOUR=[], ONE=[Item(name=A, type=ONE), Item(name=B, type=ONE)], THREE=[Item(name=D, type=THREE)], TWO=[Item(name=C, type=TWO)]}
Summary
As you can see, it's quite a lot of code for a seemingly simple operation. So my question is – is there a simpler way to achieve the result I'm after?
Just merge a map of empty lists with a map of filled lists:
val result = itemsSingle.map { items ->
    Type.values().associateWith { listOf<Item>() } + items.groupBy { it.type }
}
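A minimal usage sketch (the subscription below is my own illustration and assumes the itemsSingle from the question):
result.subscribe { map ->
    println(map)
    // {ONE=[Item(name=A, type=ONE), Item(name=B, type=ONE)], TWO=[Item(name=C, type=TWO)],
    //  THREE=[Item(name=D, type=THREE)], FOUR=[]}
}
Because the left-hand map already contains every Type key, the + merge only overwrites values, so the result always covers all enum values in declaration order.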

Ramda.js - how to view many values from a nested array

I have this code:
import {compose, view, lensProp, lensIndex, over, map} from "rambda";
let order = {
  lineItems: [
    {name: "A", total: 33},
    {name: "B", total: 123},
    {name: "C", total: 777},
  ]
};
let lineItems = lensProp("lineItems");
let firstLineItem = lensIndex(0);
let total = lensProp("total");
My goal is to get all the totals of all the lineItems (because I want to sum them). I approached the problem incrementally like this:
console.log(view(lineItems, order)); // -> the entire lineItems array
console.log(view(compose(lineItems, firstLineItem), order)); // -> { name: 'A', total: 33 }
console.log(view(compose(lineItems, firstLineItem, total), order)); // -> 33
But I can't figure out the right expression to get back the array of totals
console.log(view(?????, order)); // -> [33,123,777]
That is my question - what goes where the ????? is?
I coded around my ignorance by doing this:
let collector = [];
function collect(t) {
  collector.push(t);
}
over(lineItems, map(over(total, collect)), order);
console.log(collector); // -> [33,123,777]
But I'm sure a ramda-native knows how to do this better.
It is possible to achieve this using lenses (traversals), though it will likely not be worth the additional complexity.
The idea is that we can use R.traverse with the applicative instance of a Const type as something that is composable with a lens and combines zero or more targets together.
The Const type allows you to wrap up a value that does not change when mapped over (i.e. it remains constant). How do we combine two constant values together to support the applicative ap? We require that the constant values have a monoid instance, meaning they are values that can be combined together and have some value representing an empty instance (e.g. two lists can be concatenated with the empty list being the empty instance, two numbers can be added with zero being the empty instance, etc.)
const Const = x => ({
  value: x,
  map: function (_) { return this },
  ap: other => Const(x.concat(other.value))
})
Next we can create a function that will let us combine the lens targets in different ways, depending on the provided function that wraps the target values in some monoid instance.
const foldMapOf = (theLens, toMonoid) => thing =>
  theLens(compose(Const, toMonoid))(thing).value
This function will be used like R.view and R.over, accepting a lens as its first argument and then a function for wrapping the target in an instance of the monoid that will combine the values together. Finally it accepts the thing that you want to drill into with the lens.
Next we'll create a simple helper function that can be used to create our traversal, capturing the monoid type that will be used to aggregate the final target.
const aggregate = empty => traverse(_ => Const(empty))
This is an unfortunate leak where we need to know how the end result will be aggregated when composing the traversal, rather than simply knowing that it is something that needs to be traversed. Other languages can make use of static types to infer this information, but no such luck with JS without changing how lenses are defined in Ramda.
Given you mentioned that you would like to sum the targets together, we can create a monoid instance that does exactly that.
const Sum = x => ({
  value: x,
  concat: other => Sum(x + other.value)
})
This just says that you can wrap two numbers together and when combined, they will produce a new Sum containing the value of adding them together.
We now have everything we need to combine it all together.
const sumItemTotals = order => foldMapOf(
  compose(
    lensProp('lineItems'),
    aggregate(Sum(0)),
    lensProp('total')
  ),
  Sum
)(order).value

sumItemTotals({
  lineItems: [
    { name: "A", total: 33 },
    { name: "B", total: 123 },
    { name: "C", total: 777 }
  ]
}) //=> 933
If you just wanted to extract a list instead of summing them directly, we could use the monoid instance for lists instead (e.g. [].concat).
const itemTotals = foldMapOf(
  compose(
    lensProp('lineItems'),
    aggregate([]),
    lensProp('total')
  ),
  x => [x]
)

itemTotals({
  lineItems: [
    { name: "A", total: 33 },
    { name: "B", total: 123 },
    { name: "C", total: 777 }
  ]
}) //=> [33, 123, 777]
Based on your comments on the answer from customcommander, I think you can write this fairly simply. I don't know how you receive your schema, but if you can turn the pathway to your lineItems node into an array of strings, then you can write a fairly simple function:
const lineItemTotal = compose (sum, pluck ('total'), path)
let order = {
  path: {
    to: {
      lineItems: [
        {name: "A", total: 33},
        {name: "B", total: 123},
        {name: "C", total: 777},
      ]
    }
  }
}

console .log (
  lineItemTotal (['path', 'to', 'lineItems'], order)
)
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>
<script> const {compose, sum, pluck, path} = R </script>
You can wrap curry around this and call the resulting function with lineItemTotal (['path', 'to', 'lineItems']) (order), potentially saving the intermediate function for reuse.
Is there a particular reason why you want to use lenses here? Don't get me wrong; lenses are nice but they don't seem to add much value in your case.
Ultimately this is what you try to accomplish (as far as I can tell):
map(prop('total'), order.lineItems)
you can refactor this a little bit with:
const get_total = compose(map(prop('total')), propOr([], 'lineItems'));
get_total(order);
You can use R.pluck to get an array of values from an array of objects:
const order = {"lineItems":[{"name":"A","total":33},{"name":"B","total":123},{"name":"C","total":777}]};
const result = R.pluck('total', order.lineItems);
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>

How to group objects by values

I have an object that looks like this:
data class Product(
    val name: String,
    val maker: List<String>
)
Currently, the response that I receive from the backend (and it can't be changed) is as follows:
[{"name":"Car", "maker":["Audi"]},
{"name":"Car", "maker":["BMW"]},
{"name":"Motorcycle", "maker":["Yamaha"]},
{"name":"Motorcycle", "maker":["Kawasaki"]}
]
The actual list contains a lot more data, but the name field can be trusted as the grouping key.
What would be a way for me to map this data so that the end result is something like this:
[{"name":"Car", "maker":["Audi", "BMW"]},
{"name":"Motorcycle", "maker":["Yamaha","Kawasaki"]}
]
Just use groupBy { ... } and then process the grouped map's entries, replacing each group with a single Product:
val result = products.groupBy { it.name }.entries.map { (name, group) ->
    Product(name, group.flatMap { it.maker })
}
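For reference, a minimal sketch of this grouping applied to the sample payload (the products list below is hand-built to mirror the JSON in the question):
val products = listOf(
    Product("Car", listOf("Audi")),
    Product("Car", listOf("BMW")),
    Product("Motorcycle", listOf("Yamaha")),
    Product("Motorcycle", listOf("Kawasaki")),
)

val result = products.groupBy { it.name }.entries.map { (name, group) ->
    Product(name, group.flatMap { it.maker })
}

println(result)
// [Product(name=Car, maker=[Audi, BMW]), Product(name=Motorcycle, maker=[Yamaha, Kawasaki])]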