How to store item positions in a RecyclerView after drag & drop - android-recyclerview

I have a simple todo list app with a RecyclerView/FirestoreRecyclerAdapter/ItemTouchHelper with items A, B, C, D (see picture attached). Now I would like to efficiently store the position of these items in Firestore, initially when I add an item and whenever I vertically drag & drop them. How can I do that conceptually, or with existing/sample code? It's important to store it in the cloud so the item positions stay the same if I look at it from another device.
Some ideas about it:
The adapter positions (int) start from 0 (in this case D). When I add the item E, it gets adapter position 0 and the positions of all the other items change. So the stored position in Firestore should probably increase by 1 each time a new item is added that will be displayed at the top. But what if I have thousands of items (e.g. in a photo gallery app)? Is it efficient to update the position of all the items each time I drag & drop one?
I guess this should be a very common problem.
MainActivity of my Todo App (https://i.stack.imgur.com/o3jJ1.jpg)
Here is my code for the ItemTouchHelper.Callback method onMoved():
@Override
public void onMoved(@NonNull RecyclerView recyclerView, @NonNull RecyclerView.ViewHolder viewHolder, int fromPos, @NonNull RecyclerView.ViewHolder target, int toPos, int x, int y) {
    super.onMoved(recyclerView, viewHolder, fromPos, target, toPos, x, y);
    // Walk all currently attached children and persist their adapter position.
    for (int maxItems = recyclerView.getChildCount(), i = 0; i < maxItems; ++i) {
        RecyclerView.ViewHolder holder = recyclerView.getChildViewHolder(recyclerView.getChildAt(i));
        int adapterPosition = holder.getAdapterPosition();
        TodoAdapter.TodoHolder h = (TodoAdapter.TodoHolder) holder;
        String documentID = h.getDocumentID();
        // One write per visible item, every time something moves.
        DocumentReference docRef = mFirestore.collection("todos").document(documentID);
        docRef.update("position", adapterPosition);
    }
}

Now I would like to efficiently store the position of these items in Firestore, initially when I add an item and whenever I vertically drag & drop them.
While this is technically possible, I don't think Firestore is the best option for this problem, since everything in Firestore is about the number of reads and writes. Every time you change the position of an item, you'll perform a number of write operations equal to the number of remaining items. Also take a look at the Firebase Realtime Database; the two work together very well.
How can I do that conceptually
Simply add an order-number property to each element, representing its position in the list, and order the elements (ascending or descending) by that property. Once you move an element from one location to another, update the position of every remaining object by one.
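As a hedged sketch of that bookkeeping after a drag & drop (the todos collection and position field come from the question; persistMove and documentIdsInOrder are illustrative names, the latter holding the document IDs in their on-screen order after the drop): only the items between the old and new index actually change position, so the writes can be limited to that range and bundled into a single WriteBatch:
// Sketch only; assumes the stored position mirrors the adapter position,
// as in the question's onMoved() code.
void persistMove(List<String> documentIdsInOrder, int fromPos, int toPos) {
    FirebaseFirestore db = FirebaseFirestore.getInstance();
    WriteBatch batch = db.batch();
    int lo = Math.min(fromPos, toPos);
    int hi = Math.max(fromPos, toPos);
    for (int i = lo; i <= hi; i++) {
        // Only the items between the old and new index changed position.
        batch.update(db.collection("todos").document(documentIdsInOrder.get(i)),
                "position", i);
    }
    batch.commit(); // one round trip, but still billed as (hi - lo + 1) writes
}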
It's important to store it in the cloud so the item positions stay the same if I look at it from another device.
If you keep all elements ordered, every user will see the same arrangement.
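With the FirestoreRecyclerAdapter from the question, that shared ordering is just a query on the position property; a minimal sketch, assuming a Todo model class:
Query query = FirebaseFirestore.getInstance()
        .collection("todos")
        .orderBy("position", Query.Direction.ASCENDING); // position 0 shows at the top
FirestoreRecyclerOptions<Todo> options = new FirestoreRecyclerOptions.Builder<Todo>()
        .setQuery(query, Todo.class)
        .build();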
So the stored position in Firestore should probably increase by 1 each time a new item is added
Right, the position will increase by one every time you add a new item.
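A hedged sketch of that literal bookkeeping (FieldValue.increment() requires a recent firebase-firestore release, a WriteBatch caps out at 500 operations so a large list would need chunking, and the "E" item is hypothetical):
FirebaseFirestore db = FirebaseFirestore.getInstance();
db.collection("todos").get().addOnSuccessListener(snapshot -> {
    WriteBatch batch = db.batch();
    // Bump every existing item down one slot...
    for (DocumentSnapshot doc : snapshot.getDocuments()) {
        batch.update(doc.getReference(), "position", FieldValue.increment(1));
    }
    // ...then insert the new item at the top.
    Map<String, Object> newTodo = new HashMap<>();
    newTodo.put("title", "E"); // hypothetical new todo
    newTodo.put("position", 0);
    batch.set(db.collection("todos").document(), newTodo);
    batch.commit();
});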
Is it efficient to update the position of all the items each time I drag & drop one?
It will definitely be very costly to update the position of an item at the beginning of a list. Let's take an example: say you have a collection of 1,000 items and you want to add a new item as the second item in the list. You'll be charged one write operation for adding the item, plus another 999 write operations for updating the positions of the remaining 999 items, so 1,000 write operations in total. It's up to you to decide whether that fits your needs.

Related

dataChanged signal not reaching TableView in QML

I am creating a simple TableView in QML, 5 columns wide by 4 rows tall. The table connects to a QSortFilterProxyModel, which connects to a QAbstractTableModel. When I call the sort method of the QSFPM, my table refreshes as expected and works great.
Now I am trying to highlight a row of my table: the row delegate simply uses BLUE as the background color instead of WHITE if that row is selected. On mouse click on the table I call my emitDataChanged method in the QSFPM, and I see my "Ready to emit!" message in the debug output.
However, my table does not refresh (confirmed with console.log statements in the table). It's as if the emitted signal is not reaching my table (though calling the sort method in this same class DOES cause the table to refresh).
Why is the dataChanged signal not working as expected? My emitDataChanged method is below; I can post any other code needed.
Source file companysortfilterproxymodel.cpp (descends from QSortFilterProxyModel):
void CompanySortFilterProxyModel::emitDataChanged(QItemSelection firstCellInRowToBeUpdated)
{
    QModelIndex topLeft, bottomRight;
    if (!firstCellInRowToBeUpdated.indexes().isEmpty())
    {
        qDebug() << "Ready to emit!";
        topLeft = createIndex(0, 0);
        bottomRight = createIndex(3, 4);
        emit dataChanged(topLeft, bottomRight);
    }
}
I tried using the two model indices passed into my method (firstCellInRowToBeUpdated) but get the same result, so to keep things simple I hard-coded selecting the whole table.

How to Structure Lists

I am working on a VB.NET auto-focus routine and have the image-processing part worked out: I do some edge detection, convert to gray-scale, and then measure the standard deviation to find the most 'in focus' point of the image.
I have done this with a number of images, and the result comes out almost as a normal distribution. Now I want to integrate this with my microscope and a stepper motor.
The concept is that I would move the stepper motor between a lower and an upper limit, measuring the above through live view and recording the values in a list. The two things I want to record are the position and the standard-deviation value (a Double).
I am wondering what the best way to record these is: should it be a typed list, a dictionary, or another method?
Once I have recorded all of these values, I want to go through them to conduct some simple analysis, so how would I then determine the average, min, max, etc.?
My first attempt at storing the information was a typed list, where I had essentially done the below:
Public ZPositions As New List(Of Zfocus)

Public Class Zfocus
    Public Position As Integer
    Public GreyStDev As Double
End Class
The second way was to use a dictionary:
Public ZPosition As New Dictionary(Of Integer, Double)
However, in both cases I am not sure how I can either pull out the single maximum entry (e.g. the Position integer) or, from the dictionary, the position key (Integer) that (sort of) corresponds to the best auto-focus position.
The third added bonus would be the ability to pull out any positions above a specific value, which may correspond to having some focus information within them, for focus stacking.
Many thanks
Big thanks to jmcilhinney, this solved my issue and works a treat!
I went with a strongly typed list (the ZFocus list), and then I could do the below:
MaxPosition = ZPositions.First(Function(zp1) zp1.GreyStDev = ZPositions.Max(Function(zp2) zp2.GreyStDev))
This allowed me to set up an auto-focus routine that loops through a number of images (as a test), stores the position (the image number in this case) and the edge-intensity information, and at the end pulls out the strongest intensity, which forms the best auto-focus point in my case.
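As a non-authoritative follow-up for the analysis part of the question (the threshold variable here is illustrative), the same LINQ style covers the average, min, max, and the focus-stacking filter:
' Simple aggregates over the recorded sweep:
Dim avgDev As Double = ZPositions.Average(Function(zp) zp.GreyStDev)
Dim minDev As Double = ZPositions.Min(Function(zp) zp.GreyStDev)
Dim maxDev As Double = ZPositions.Max(Function(zp) zp.GreyStDev)
' Positions above a chosen threshold, e.g. candidates for focus stacking:
Dim inFocus As List(Of Zfocus) =
    ZPositions.Where(Function(zp) zp.GreyStDev > threshold).ToList()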

OpenCV - Variable value range of trackbar

I have a set of images and want to cross-match them all and display the results using trackbars, with OpenCV 2.4.6 (ROS Hydro package). The matching part is done using a vector of vectors of vectors of cv::DMatch objects:
image[0] --- image[3] -------- image[8] ------ ...
   |             |                  |
   |       cv::DMatch-vect    cv::DMatch-vect
   |
image[1] --- ...
   |
image[2] --- ...
   |
  ...
   |
image[N] --- ...
Because we omit matching an image with itself (no point in doing that), and because a query image might not be matched with all the rest, each set of matched train images for a query image might have a different size from the rest. Note that the way it's implemented right now I actually match a pair of images twice, which of course is not optimal (especially since I used a BruteForce matcher with cross-check turned on, which basically means that I match a pair of images 4 times!), but for now that's it. In order to avoid on-the-fly drawing of matched pairs of images, I have populated a vector of vectors of cv::Mat objects. Each cv::Mat represents the current query image and some matched train image (I populate it using cv::drawMatches()):
image[0] --- cv::Mat[0,3] ---- cv::Mat[0,8] ---- ...
   |
image[1] --- ...
   |
image[2] --- ...
   |
  ...
   |
image[N] --- ...
Note: In the example above, cv::Mat[0,3] stands for the cv::Mat that stores the product of cv::drawMatches() using image[0] and image[3].
Here are the GUI settings:
Main window: here I display the current query image. Using a trackbar - let's call it TRACK_QUERY - I iterate through each image in my set.
Secondary window: here I display the matched pair (query,train), where the combination between the position of TRACK_QUERY's slider and the position of the slider of another trackbar in this window - let's call it TRACK_TRAIN - allows me to iterate through all the cv::Mat-match-images for the current query image.
The issue here comes from the fact that each query can have a variable number of matched train images. My TRACK_TRAIN should be able to adjust to the number of matched train images, that is, the number of elements in each cv::Mat vector for the current query image. Sadly, so far I have been unable to find a way to do that. cv::createTrackbar() requires a count parameter, which from what I see sets the limit of the trackbar's slider and cannot be altered later on. Do correct me if I'm wrong, since this is exactly what's bothering me. A possible solution (less elegant and involving various checks to avoid out-of-range errors) is to take the size of the largest set of matched train images and use it as the limit for my TRACK_TRAIN. I would like to avoid doing that if possible. Another possible solution involves creating a trackbar per query image with the appropriate value range and swapping each into my secondary window according to the selected query image. For now this seems the easier way to go, but it poses a big overhead of trackbars, not to mention that I haven't heard of OpenCV allowing you to hide GUI controls. Here are two examples that might clarify things a little bit more:
Example 1:
In the main window I select image 2 using TRACK_QUERY. For this image I have managed to match 5 other images from my set; let's say those are images 4, 10, 17, 18 and 20. The secondary window updates automatically and shows me the match between image 2 and image 4 (the first in the subset of matched train images). TRACK_TRAIN has to go from 0 to 4. Moving the slider in both directions allows me to go through images 4, 10, 17, 18 and 20, updating the secondary window each time.
Example 2:
In the main window I select image 7 using TRACK_QUERY. For this image I have managed to match 4 other images from my set; let's say those are images 0, 1, 11 and 19. The secondary window updates automatically and shows me the match between image 7 and image 0 (the first in the subset of matched train images). TRACK_TRAIN has to go from 0 to 3. Moving the slider in both directions allows me to go through images 0, 1, 11 and 19, updating the secondary window each time.
If you have any questions feel free to ask and I'll answer them as well as I can. Thanks in advance!
PS: Sadly, the ROS package has the bare minimum of what OpenCV can offer: no Qt integration, no OpenMP, no OpenGL, etc.
After doing some more research I'm pretty sure this is currently not possible. That's why I implemented the first proposition from my question - use the match vector with the largest number of matches in it to determine a maximum size for the trackbar, and then do some checking to avoid out-of-range exceptions. Below is a more or less detailed description of how it all works. Since the matching procedure in my code involves some additional checks that do not concern the problem at hand, I'll skip it here. Note that in a given set of images we want to match, I refer to an image as object-image when that image (example: a card) is currently matched to a scene-image (example: a set of cards) - the top level of the matches vector (see below), equal to the index in processedImages (see below). I find the train/query notation in OpenCV somewhat confusing; this scene/object notation is taken from http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html. You can change or swap the notation to your liking, but make sure you change it everywhere accordingly, otherwise you might end up with some weird results.
// stores all the images that we want to cross-match
std::vector<cv::Mat> processedImages;
// stores keypoints for each image in processedImages
std::vector<std::vector<cv::KeyPoint> > keypoints;
// stores descriptors for each image in processedImages
std::vector<cv::Mat> descriptors;
// fill processedImages here (read images from files, convert to grayscale, undistort, resize etc.), extract keypoints, compute descriptors
// ...
// I use brute force matching since I also used ORB, which has binary descriptors, so NORM_HAMMING is the way to go
cv::BFMatcher matcher(cv::NORM_HAMMING);
// matches contains the match-vectors for each image matched to all other images in our set
// top level index matches.at(X) is equal to the image index in processedImages
// middle level index matches.at(X).at(Y) gives the match-vector for the Xth image and some other Yth from the set that is successfully matched to X
std::vector<std::vector<std::vector<cv::DMatch> > > matches;
// contains images that store visually all matched pairs
std::vector<std::vector<cv::Mat> > matchesDraw;
// fill all the vectors above with data here, don't forget about matchesDraw
// stores the highest count of matches for all pairs - I used simple exclusion by simply comparing the size() of the current std::vector<cv::DMatch> vector with the previous value of this variable
long int sceneWithMaxMatches = 0;
// ...
// after all is ready do some additional checking here in order to make sure the data is usable in our GUI. A trackbar for example requires AT LEAST 2 for its range since a range (0;0) doesn't make any sense
if(sceneWithMaxMatches < 2)
    return -1;
// in this window show the image gallery (scene-images); the user can scroll through all image using a trackbar
cv::namedWindow("Images", CV_GUI_EXPANDED | CV_WINDOW_AUTOSIZE);
// just a dummy to store the state of the trackbar
int imagesTrackbarState = 0;
// create the first trackbar that the user uses to scroll through the scene-images
// IMPORTANT: use processedImages.size() - 1 since indexing in vectors is the same as in arrays - it starts from 0 and not reducing it by 1 will throw an out-of-range exception
cv::createTrackbar("Images:", "Images", &imagesTrackbarState, processedImages.size() - 1, on_imagesTrackbarCallback, NULL);
// in this window we show the matched object-images relative to the selected image in the "Images" window
cv::namedWindow("Matches for current image", CV_WINDOW_AUTOSIZE);
// yet another dummy to store the state of the trackbar in this new window
int imageMatchesTrackbarState = 0;
// IMPORTANT: again since sceneWithMaxMatches stores the SIZE of a vector we need to reduce it by 1 in order to be able to use it for the indexing later on
cv::createTrackbar("Matches:", "Matches for current image", &imageMatchesTrackbarState, sceneWithMaxMatches - 1, on_imageMatchesTrackbarCallback, NULL);
while(true)
{
    char key = cv::waitKey(20);
    if(key == 27)
        break;
    // from here on the magic begins
    // show the image gallery; use the position of the "Images:" trackbar to call the image at that position
    cv::imshow("Images", processedImages.at(cv::getTrackbarPos("Images:", "Images")));
    // store the index of the current scene-image by calling the position of the trackbar in the "Images:" window
    int currentSceneIndex = cv::getTrackbarPos("Images:", "Images");
    // we have to make sure that the match of the currently selected scene-image actually has something in it
    if(matches.at(currentSceneIndex).size())
    {
        // store the index of the current object-image that we have matched to the current scene-image in the "Images:" window
        int currentObjectIndex = cv::getTrackbarPos("Matches:", "Matches for current image");
        cv::imshow(
            "Matches for current image",
            matchesDraw.at(currentSceneIndex).at(
                currentObjectIndex < matchesDraw.at(currentSceneIndex).size() ? // is the current object index within range for the current scene?
                currentObjectIndex :                                            // yes, use it as-is
                matchesDraw.at(currentSceneIndex).size() - 1));                 // if outside the range, show the last matched pair!
    }
}
// do something else
// ...
The tricky part is the trackbar in the second window, responsible for accessing the matched images of the currently selected image in the "Images" window. As explained above, I set the trackbar "Matches:" in the "Matches for current image" window to have a range from 0 to (sceneWithMaxMatches - 1). However, not all images have the same number of matches with the rest of the image set (this applies tenfold if you have done some additional filtering to ensure reliable matches, for example by exploiting the properties of the homography, a ratio test, a min/max distance check etc.). Because I was unable to find a way to dynamically adjust the trackbar's range, I needed to validate the index; otherwise, for some of the images and their matches, the application would throw an out-of-range exception. This is due to the simple fact that for some matches we try to access a match vector with an index greater than its size minus 1, because cv::getTrackbarPos() goes all the way up to (sceneWithMaxMatches - 1). If the trackbar's position goes out of range for the currently selected vector of matches, I simply set the matchDraw image in "Matches for current image" to the very last one in the vector. Here I exploit the fact that neither the indexing nor the trackbar's position can go below zero, so there is no need to check the lower bound, only what comes after the initial position 0. If this is not your case, make sure you check the lower bound too, not only the upper.
Hope this helps!

Moving all array values by one index

One word: Highscores. And Java.
The top 5 highscores for my game are stored in an ArrayList of 5 elements. I seem to understand everything except moving all elements by one index. For example: a new player has more points than the previously ranked 1st player, so he replaces him; now the previously first player is second, the second is third, and so on.
If I understood your situation correctly, the following code should do what you are asking for:
ArrayList<Integer> highscores = new ArrayList<>();
//...add elements to array
int newHighscore = 1000;
/* add the new highscore to the first index of the array and automatically
increment the indices of the elements that are after it */
highscores.add(0, newHighscore);
//remove the last highscore from the list
highscores.remove(highscores.size() - 1);
If you want a more detailed example, I can expand on it.
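For instance, here is a minimal sketch that generalizes the idea so a new score is inserted at whatever rank it earns (the class and constant names are illustrative, not from the question):
import java.util.ArrayList;
import java.util.List;

public class Highscores {
    private static final int TOP_N = 5;           // size of the leaderboard
    private final List<Integer> scores = new ArrayList<>();

    // Insert newScore at its rank; everything below it shifts by one index.
    public void submit(int newScore) {
        int rank = 0;
        while (rank < scores.size() && scores.get(rank) >= newScore) {
            rank++;                               // find the first score we beat
        }
        if (rank < TOP_N) {
            scores.add(rank, newScore);           // shifts later elements automatically
            if (scores.size() > TOP_N) {
                scores.remove(scores.size() - 1); // drop the score that fell off
            }
        }
    }
}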

Random letters instead of numbers in dynamic text AS 2.0

So I'm trying to program a game in Flash; it's my very first time and I can't get something to work.
In the game, a ball floats across the screen and you get 2 points if you click on it. Except when I test it, the first time I click on the ball I get the letters 'eoceeeo', and if I click the ball again I get the letters 'eeoS'. The dynamic text is on a layer whose first frame has the following ActionScript:
var _root.score = 0;
gameScore.text = _root.score;
The dynamic text has a variable of _root.score and an instance name of gameScore.
The floating ball has the following ActionScript:
on(release) { _root.score+=2; _root.gameScore.text = _root.score; }
If you click on your gameScore dynamic text field, you can scroll down to its Variable property and set that to _root.score. That way, you do not have to call gameScore.text = _root.score every time the score changes; it will simply update automatically.
Also, if you remove the var from in front of _root.score = 0, it will be easier for ActionScript to handle. Perhaps the score variable is being treated as an integer, and the dynamic text field is having trouble displaying it as a string of characters; this can be solved with String(_root.score) or _root.score.toString().
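For example, an untested sketch of both suggestions in the question's setup:
// frame 1 of the score layer
_root.score = 0;                       // no 'var' in front
gameScore.text = String(_root.score);  // explicit string conversion

// on the floating ball
on (release) {
    _root.score += 2;
    _root.gameScore.text = String(_root.score);
}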
That should make your code a bit less complex and help you identify your random-letters problem, which can't be solved specifically with the information you have here. Hope that helps!