I wasn't sure how exactly to word the title, but this is the situation. I have 2 lists: DBList is a list of DB values and NewList is the new list of values to be stored in the DB. Now the tricky part is that I am only adding values to DBList that don't already exist, BUT if DBList contains values that NewList doesn't, then I want to remove them.
Essentially, NewList becomes DBList, but I want to retain all the applicable previously existing data in DBList that is already persisted to the database.
This is what I have and it works, but I want to know if there is a better way to do it.
List<DeptMajors> DBList;
List<DeptMajors> NewList;
for (DeptMajors dm : NewList) {
    if (!DBList.contains(dm)) {
        DBList.add(dm);
    }
}

Iterator<DeptMajors> i = DBList.iterator();
while (i.hasNext()) {
    DeptMajors dm = i.next();
    if (!NewList.contains(dm)) {
        i.remove();
    }
}
So the first loop puts all data from NewList into DBList that doesn't already exist. Then the next loop checks whether DBList contains data that doesn't exist in NewList and removes it from DBList.
Ok, so I had to make up a DeptMajors class:
import groovy.transform.*

@TupleConstructor
@ToString(includeNames = true)
@EqualsAndHashCode(includes = ['id'])
class DeptMajors {
    int id
    String name
    int age
}
Instances of this class are equal if the id matches (no other fields are compared)
We can then make a dbList (lower case initial char for variables, else Groovy can sometimes get confused and think it's a class)
def dbList = [
    new DeptMajors(1, 'tim', 21),
    new DeptMajors(2, 'raymond', 20)
]
And a newList which contains an updated raymond (which will be ignored), a new entry alice (which will be added) and no tim (so that will be removed)
def newList = [
    new DeptMajors(2, 'raymond', 30),
    new DeptMajors(3, 'alice', 28)
]
We can then work out our new merged list. This is the intersection of dbList and newList (so we keep raymond in the original state), added to the new elements in newList which can be found by taking dbList away from it:
def mergedList = dbList.intersect(newList) + (newList - dbList)
This gives the result I think you want:
assert mergedList == [
    new DeptMajors(2, 'raymond', 20), // unchanged
    new DeptMajors(3, 'alice', 28)    // new entry (tim is removed)
]
Edit
Or as BZ says in the comments, you could also use:
def mergedList = newList.collect { e -> dbList.contains(e) ? dbList.find { it == e } : e }
Or the shorter:
def mergedList = newList.collect { e -> dbList.find { it == e } ?: e }
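And if you need to stay in plain Java rather than Groovy, the same merge can be sketched with a HashSet for the lookups (this assumes DeptMajors implements equals() and hashCode() on id, just like the Groovy class above):
// Rough plain-Java sketch of the same merge; assumes DeptMajors overrides equals()/hashCode().
Set<DeptMajors> newSet = new HashSet<>(NewList);
DBList.removeIf(dm -> !newSet.contains(dm));      // drop DB entries no longer present in NewList
Set<DeptMajors> existing = new HashSet<>(DBList);
for (DeptMajors dm : NewList) {
    if (!existing.contains(dm)) {                 // add only the genuinely new entries
        DBList.add(dm);
    }
}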
Related
I have a class called Person
data class Person(
    val id: Int,
    val name: String
)

data class IDs(
    val id: Int,
    val active: Boolean
)
and a list that holds a number of ids and another list of Persons:
val myStu = listOf<Person>(Person(1, "Name_1"), Person(2, "Name_2"), Person(3, "Name_3"))
var ids = listOf<IDs>(IDs(1, false), IDs(2, true), IDs(3, true))
var newIds = listOf<Int>(2, 3, 4, 6)
First I want to apply two actions to myStu. The first is to get a list that includes all the items from myStu whose id matches an id in the ids list, and only if active is true.
myStu, or rather the new list, will then have the values
Person(2, "Name_2"), Person(3, "Name_3")
Then, for action two, I need to add a new item to the new list for each id in newIds that does not already exist in it; in other words we will add Person(4, "None") and Person(6, "None"), where the values 4 and 6 come from the newIds list.
the final output will be :
id= 2 name = "Name_2", id= 3 name = "Name_3", id= 4 name = "None" , id =6 name="None"
I want to write the code with filter. I failed at the first step because I don't know how to use contains() with the list inside the filter:
val newArr = myStu.filter {
    ids.contains(it.id)
}
The "easiest" way of doing that would be to use filter directly, there's no need for contains. If we were to use contains, then we would need to also search for which element contained the id, in order to get the status. We can just do a .any() to do both at the same time.
V1
val activeStu = myStu.filter { person -> ids.any { it.id == person.id && it.active } }
val result = newIds.map { newId ->
    activeStu.find { it.id == newId } ?: Person(id = newId, name = "None")
}
Another method, that might work a bit better if we have big lists, would be to first transform the IDs list into a map. That way the second part of our code is a bit more efficient, since there is no search involved.
V2
val idsMap = ids.associate { it.id to it.active }
val activeStu = myStu.filter { idsMap[it.id] ?: false }
//the creation of the result list is the same
Version without creating 2 new lists. This works, but it might be quite inefficient processing-wise, and also harder to understand what is going on, IMO.
V3
val result = newIds.map { newId ->
    // try to find an IDs entry with the current newId and status true
    when (ids.find { it.id == newId }?.active) {
        // if found, then find the corresponding Person
        true -> myStu.find { it.id == newId } ?: Person(newId, "None") // if this happens, it means that an IDs with status true existed and no Person had that id. Not sure what you want in this scenario; this version creates one of the "None" persons.
        // if not found, then create a new one
        else -> Person(newId, "None")
    }
}
Note: depending on what version of kotlin you have, you might have to change the when statement to this:
when (ids.find { it.id == newId }?.active == true)
Since I think I remember that null wasn't treated as false in older versions (I've run this with version 1.4.20).
Btw, you can also use this version with the idsMap from V2; just replace the when(...) with when(idsMap[newId]) or when(idsMap[newId] == true), depending on the Kotlin version.
Kotlin has some pretty cool functions for collections. However, I have come across a problem in which the solution is not apparent to me.
I have a List of Objects. Those Objects have an ID field which coincides with a SQLite database. SQL operations are performed on the database, and a new list is generated. How can the index of an item from the new list be found based on the "ID" field (or any other field for that matter)?
The Collection.find {} function returns the object, but not the index.
indexOfFirst can find the index of the first element of a collection that satisfies a specified predicate.
We have a SQLite DB that a call is made to in order to retrieve parentList. We can obtain the items in the ArrayList with this type of code:
fun onDoIt(view: View) {
    initDB()
    for (t in 0..X - 1) {
        var N: String = parentList[t].dept
        // NOTE two syntaxes here: [t] and get(t)
        if (t == 1) {
            var B: String = parentList[0].idD.toString()
            println("$$$$$$$$$$$$$$$$$$$$$ ====== " + B)
        }
        var I: String = parentList.get(t).idD.toString()
        println("################### id " + I + " for " + N)
    }
}
private fun initDB() {
    parentList = db.querySPDept()
    if (parentList.isEmpty()) {
        title = "No Records in DB"
    } else {
        X = parentList.size
        println("**************************************** SIZE " + X)
        title = "SP View Activity"
    }
}
I've a list of items. I want to process a set of items which are in the middle of the list.
Ex: Assume a list of employees who have id, first name, last name and middle name as attributes.
I want to consider all rows between lastName "xxx" and "yyy" and process them further.
How can this be optimized in Java 8? Optimization is my first concern.
I tried using Java 8 streams and parallel streams, but termination (break) is not allowed in the forEach of Java 8 streams. Also, we cannot use outside variables (the "start" variable below) inside forEach.
Below is the code which I need to optimize:
boolean start = false;
for (Employee employee : employees) {
    if (employee.getLastname().equals("yyy")) {
        break;
    }
    if (start) {
        // My code to process
    }
    if (employee.getLastname().equals("xxx")) {
        start = true;
    }
}
What is the best way to handle the above problem in Java 8?
That is possible in Java 9 via (I've simplified your example):
Stream.of(1, 2, 3, 4, 5, 6)
      .dropWhile(x -> x != 2)
      .takeWhile(x -> x != 6)
      .skip(1)
      .forEach(System.out::println);
This will take the values in the range 2 - 6, that is, it will print 3, 4, 5.
Or for your example:
employees.stream()
         .dropWhile(e -> !e.getLastname().equals("xxx"))
         .takeWhile(e -> !e.getLastname().equals("yyy"))
         .skip(1)
         .forEach(....)
There are back-ports for dropWhile and takeWhile, see here and here
EDIT
Or you can get the indexes of those delimiters first and then do a subList (but this assumes that xxx and yyy are unique in the list of employees):
int[] indexes = IntStream.range(0, employees.size())
        .filter(x -> employees.get(x).getLastname().equals("xxx") || employees.get(x).getLastname().equals("yyy"))
        .toArray();

employees.subList(indexes[0] + 1, indexes[1])
         .forEach(System.out::println);
So I have an ArrayList basket that stores items; each item is made up of a name field and a price field.
As you can see, there are two Sugar items with the same price.
I want my code to print every single item with the number of times it is repeated.
What I want it to do is this:
Count the duplicate values
2 x Sugar for 100
1 x Cake for 75
1 x Salt for 30
1 x Fanta for 50
My Item class's toString method is:
public String toString() {
    return name + " for " + price;
}
basket = new ArrayList<Item>();
basket.add(new Item("Sugar", 100));
basket.add(new Item("Sugar", 100));
basket.add(new Item("Cake", 75));
basket.add(new Item("Salt", 30));
basket.add(new Item("Fanta", 50));

HashSet<Item> set = new HashSet<>(basket);
for (Item item : set) {
    System.out.println(Collections.frequency(basket, item) + " x " + item);
}
But what it does is...
1 x Sugar for 100
1 x Sugar for 100
1 x Cake for 75
1 x Salt for 30
1 x Fanta for 50
So I'm thinking that it's comparing the toStrings, but the repeated ones are not coming out as equal.
Please help.
This is my first ever post and I don't really know the exact rules of posting.
It would be better to use a Map in this scenario. First, make sure you are implementing the equals(Object) and hashCode() methods in your Item object, then put each unique Item into the map with an AtomicInteger value starting at 1. Before putting the object into the map, check whether the item is already there; if it already exists in the map, get the AtomicInteger and increment it.
Something like this..
Map<Item, AtomicInteger> quantitiesByItem = new HashMap<>();
for (Item item : listOfItems) {
    if (!quantitiesByItem.containsKey(item)) {
        quantitiesByItem.put(item, new AtomicInteger(1));
    } else {
        quantitiesByItem.get(item).incrementAndGet();
    }
}
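The counts can then be printed from the map afterwards, something like this:
// Prints e.g. "2 x Sugar for 100" for each distinct item.
for (Map.Entry<Item, AtomicInteger> entry : quantitiesByItem.entrySet()) {
    System.out.println(entry.getValue().get() + " x " + entry.getKey());
}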
A HashSet actually prevents duplicates so would only be useful in a scenario where you are trying to strip duplicate records.
EDIT: You could also use Collections.frequency, but you need to implement equals(Object) and hashCode() in your Item object; otherwise the objects are determined to be different because they aren't the exact same object. You will also need to stop adding all your values from your list to a set and instead just pass your List object to the Collections.frequency method.
This is what my code was missing in my Item class:
You need to override the equals and hashCode methods that every object has by default. This will allow you to compare two objects by their state.
For example, comparing object A, with two private fields (an int field and a String field), to object B: if both fields are the same, the return value will be true.
@Override
public boolean equals(Object obj) {
    // check for null
    if (obj == null) {
        return false;
    }
    // make sure obj is an Item
    if (getClass() != obj.getClass()) {
        return false;
    }
    // cast obj as an Item
    final Item passedItem = (Item) obj;
    // check fields
    if (!Objects.equals(this.name, passedItem.name)) {
        return false;
    }
    if (!Objects.equals(this.price, passedItem.price)) {
        return false;
    }
    return true;
}
/**
 * Override hashCode to be able to compare item state and see if two items are the same
 * @return
 */
@Override
public int hashCode() {
    int hash = 7;
    hash = 31 * hash + Objects.hashCode(this.name);   // hash the same fields that equals compares
    hash = 31 * hash + Objects.hashCode(this.price);
    return hash;
}
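With these two overrides in place, the original loop from the question should collapse the duplicates and print the expected counts:
// The two equal Sugar items now map to one set entry, and frequency counts both.
HashSet<Item> set = new HashSet<>(basket);
for (Item item : set) {
    System.out.println(Collections.frequency(basket, item) + " x " + item);
}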
If I have a field x that can contain a value of y, or z, etc., is there a way I can query so that I return only the values that have been indexed?
Example
x available settable values = test1, test2, test3, test4
Item 1 : Field x = test1
Item 2 : Field x = test2
Item 3 : Field x = test4
Item 4 : Field x = test1
Performing required query would return a list of:
test1, test2, test4
I've implemented this before as an extension method:
public static class ReaderExtentions
{
    public static IEnumerable<string> UniqueTermsFromField(
        this IndexReader reader, string field)
    {
        // Position the term enumerator at the first term of the given field.
        var termEnum = reader.Terms(new Term(field));
        do
        {
            var currentTerm = termEnum.Term();
            // Stop once we run out of terms or move past the requested field.
            if (currentTerm == null || currentTerm.Field() != field)
                yield break;
            yield return currentTerm.Text();
        } while (termEnum.Next());
    }
}
You can use it very easily like this:
var allPossibleTermsForField = reader.UniqueTermsFromField("FieldName");
That will return you what you want.
EDIT: I was skipping the first term above, due to some absent-mindedness. I've updated the code accordingly to work properly.
TermEnum te = indexReader.Terms(new Term("fieldx"));
do
{
    Term t = te.Term();
    if (t == null || t.Field() != "fieldx") break;
    Console.WriteLine(t.Text());
} while (te.Next());
You can use facets to return the first N values of a field if the field is indexed as a string, or is indexed using KeywordTokenizer and no filters. This means that the field is not tokenized but just saved as it is.
Just set the following properties on a query:
facet=true
facet.field=fieldname
facet.limit=N //the number of values you want to retrieve
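If the query goes through Solr, for example, such a faceting request could look roughly like this (host, core name, and limit are made up):
http://localhost:8983/solr/mycore/select?q=*:*&rows=0&facet=true&facet.field=x&facet.limit=10
The facet counts in the response then list the distinct indexed values of field x.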
I think a WildcardQuery searching on field 'x' and value of '*' would do the trick.
I once used Lucene 2.9.2 and there I used the approach with the FieldCache as described in the book "Lucene in Action" by Manning:
String[] fieldValues = FieldCache.DEFAULT.getStrings(indexReader, fieldname);
The array fieldValues contains all values in the index for the field fieldname (example: ["NY", "NY", "NY", "SF"]), so it is up to you how to process the array. Usually you create a HashMap<String, Integer> that sums up the occurrences of each possible value, in this case NY=3, SF=1.
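For example, a rough sketch of that counting step could look like this (assuming entries can be null for documents without a value in the field):
// Count how often each distinct value occurs, e.g. NY=3, SF=1.
Map<String, Integer> counts = new HashMap<String, Integer>();
for (String value : fieldValues) {
    if (value == null) continue;               // document has no value in this field
    Integer current = counts.get(value);
    counts.put(value, current == null ? 1 : current + 1);
}
System.out.println(counts.keySet());           // the distinct indexed values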
Maybe this helps. It is quite slow and memory-consuming for very large indexes (1,000,000 documents in the index), but it works.