It seems to me that the Google Sheets API's append method, the method used to add data to a Google Sheet from your program, expects a 2D array. Sam Berlin seems to say as much here: Google Sheet API batch update issue iOS. I was wondering if anyone knows why this is?
If you go to the reference page for the method spreadsheets.values.append you will see that the values you need to supply are an object of type ValueRange, and in the documentation you can actually see:
values[] The data that was read or to be written. This is an array of arrays, the outer array representing all the data and each inner array representing a major dimension. Each item in the inner array corresponds with one cell.
And why is it done that way... well, your guess is as good as mine.
Having a standardized object to represent the values of a given range (input or output) is a plus when working with any API. Also, in cases where you want to add more than one row/column at a time, you can do it easily by providing a 2-D array in which there are as many inner arrays as rows/columns you want to insert (as stated by the documentation).
TL;DR: a 2-D array is a very good analogue of the values inside a range of a spreadsheet.
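For illustration, here is a minimal sketch of what that 2-D shape looks like in a call to spreadsheets.values.append using the Google API Python client; creds and SPREADSHEET_ID are placeholders you would supply yourself:

from googleapiclient.discovery import build

# Assumes creds already holds valid credentials (OAuth flow or service account).
service = build("sheets", "v4", credentials=creds)

# The ValueRange body: the outer list is the whole block of data,
# each inner list is one row, and each item in an inner list is one cell.
body = {
    "values": [
        ["Alice", 30, "alice@example.com"],   # row 1
        ["Bob", 25, "bob@example.com"],       # row 2
    ]
}

result = service.spreadsheets().values().append(
    spreadsheetId=SPREADSHEET_ID,        # placeholder for your spreadsheet ID
    range="Sheet1!A1",
    valueInputOption="USER_ENTERED",
    body=body,
).execute()

So appending two rows at once is just a matter of putting two inner arrays in "values".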
Apologies if this is a silly question, but I am not quite sure why this behavior is the case, or whether I am misunderstanding it. I was trying to create a function for the 'apply' method, and noticed that if you run apply on a Series, the data is passed to the (u)func as an np.array, whereas if you pass the same Series within a DataFrame of one column, it is passed as a Series.
This affects the way a simpleton like me writes the function (I prefer iloc indexing to integer-based indexing on the array), so I was wondering whether this is on purpose or a historical accident?
Thanks,
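For reference, a quick sketch to check what type apply actually hands to the function in each case (the output may vary between pandas versions):

import pandas as pd

s = pd.Series([1, 2, 3], name="x")
df = s.to_frame()

def report(value):
    print(type(value))   # show what apply hands to the function
    return value

s.apply(report)    # Series.apply: called once per element
df.apply(report)   # DataFrame.apply (default axis=0): called once per column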
I have an understanding of how multidimensional arrays work and how to use them, except for one thing: in what situations would we need to use them, and why?
Basically, multidimensional arrays are used when you want to put arrays inside an array.
Say you have 10 students and each takes 3 tests. You can create an array like arr_name[10][3].
So, calling arr_name[0][0] gives you the result of student 1 on test 1.
Calling arr_name[5][2] gives you the result of student 6 on test 3.
You could do this with a 30-element one-dimensional array, but the multidimensional version is:
1) easier to understand
2) easier to debug.
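For instance, a quick sketch of the students-and-tests example (Python used here just for illustration):

# 10 students x 3 tests, stored as a list of rows (one inner list per student)
scores = [[0] * 3 for _ in range(10)]

scores[0][0] = 85   # student 1, test 1
scores[5][2] = 72   # student 6, test 3

print(scores[5][2])   # 72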
Here are a couple of examples of arrays in familiar situations.
You might imagine a two-dimensional array as a grid, so naturally it is useful when you're dealing with graphics. You might get a pixel from the screen by saying
pixel = screen[20][5] // get the pixel at row 20, column 5
That could also be done with a 3 dimensional array to represent 3d space.
An array could act like a spreadsheet. Here the rows are customers, and the columns are name, email, and date of birth.
name = customers[0][0]
email = customers[0][1]
dateofbirth = customers[0][2]
Really there is a more fundamental pattern underlying this. Things have things have things... and so on. And in a sense you're right to wonder whether you need multidimensional arrays, because there are other ways to represent the same pattern. They are just there for convenience. You could alternatively:
Have a single-dimensional array and do some math to make it act multidimensional (sketched below). If you indexed pixels one by one, left to right, top to bottom, you would end up with a million or so elements. Divide an index by the width of the screen to get the row; the remainder is the column.
Use objects. Instead of using a multidimensional array in example 2 you could have a single dimensional array of Customer objects. Each Customer object would have the attributes name, email and dob.
So there's rarely one way to do something. Just choose the clearest way. With arrays you're accessing by number; with objects you're accessing by name.
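To make both alternatives concrete, here is a small sketch (Python, with made-up sizes and values) of the index arithmetic and of an object-based version of the customer example:

# Alternative 1: a flat one-dimensional list acting as a WIDTH x HEIGHT grid.
WIDTH, HEIGHT = 640, 480
screen = [0] * (WIDTH * HEIGHT)

def get_pixel(row, col):
    return screen[row * WIDTH + col]       # index = row * width + column

def row_and_col(index):
    return index // WIDTH, index % WIDTH   # divide for the row, remainder for the column

# Alternative 2: a one-dimensional list of objects instead of a 2-D array of fields.
class Customer:
    def __init__(self, name, email, dob):
        self.name = name
        self.email = email
        self.dob = dob

customers = [Customer("Ada", "ada@example.com", "1815-12-10")]
print(customers[0].email)   # access by name instead of by column number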
Such a solution comes across as intuitive when you are faced with accessing a data element identified by a multidimensional index, i.e. when "which element" is defined by more than one dimension.
Good uses for 2D (two-dimensional) arrays might be:
Matrix math, e.g. rotating things in space or on a plane (see the sketch below), and more.
Maps, such as game maps: top or side views, for either actual graphics or descriptive data.
Spreadsheet-like storage.
Multi-column display-table data.
Various kinds of graphics work.
I know there could be much more, so maybe someone else can add to this list in their answers.
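As one concrete example from that list, a 2x2 rotation matrix is naturally stored as a 2-D array; here is a small sketch in Python:

import math

def rotate_point(x, y, angle_rad):
    # 2x2 rotation matrix stored as a 2-D list (a list of rows)
    r = [[math.cos(angle_rad), -math.sin(angle_rad)],
         [math.sin(angle_rad),  math.cos(angle_rad)]]
    # matrix-vector multiplication: each output component is a row dotted with (x, y)
    return (r[0][0] * x + r[0][1] * y,
            r[1][0] * x + r[1][1] * y)

print(rotate_point(1.0, 0.0, math.pi / 2))   # approximately (0.0, 1.0)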
As an aid to learning Objective-C/OOP, I'm designing an iOS app to store and display periodic body-weight measurements. I've got a singleton which returns a mutable array of the shared store of measurement objects. Each measurement will have at least a date and a body weight, and I want to be able to add historic measurements.
I'd like to display the measurements in date order. What's the best way to do this? As far as I can see the options are as follows: 1) when adding a measurement, I override addObject: to sort the shared store every time after a measurement is added, 2) when retrieving the mutable array I sort it, or 3) I retrieve the mutable array in whatever order it happens to be in the shared store, then sort it when displaying the table/chart.
It's likely that the data will be retrieved more frequently than a new datum is added, so option 1 will reduce redundant sorting of the shared store - so this is the best way, yes?
You can use a modified version of (1). Instead of sorting the complete array each time a new object is inserted, you use the method described here: https://stackoverflow.com/a/8180369/1187415 to insert the new object into the array at the correct place.
Then each insert needs only a binary search to find the correct index for the new object, and the array is always in the correct order.
Since you said that the data is more frequently retrieved than new data is added, this seems to be more efficient.
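The linked answer shows this in Objective-C; as a language-neutral sketch of the same idea (binary search for the insertion index), here is the pattern in Python using the bisect module:

import bisect
from datetime import date

# A minimal sketch of "sorted insert": the list stays ordered by date because each
# new measurement is placed at the index found by a binary search, instead of
# re-sorting the whole array after every add.
measurements = []   # list of (date, weight) tuples, kept sorted by date
dates = []          # parallel list of just the dates, used by bisect for the search

def add_measurement(when, weight):
    index = bisect.bisect_right(dates, when)   # O(log n) search for the slot
    dates.insert(index, when)
    measurements.insert(index, (when, weight))

add_measurement(date(2023, 3, 1), 81.2)
add_measurement(date(2023, 1, 15), 83.0)
add_measurement(date(2023, 2, 10), 82.1)
print(measurements)   # always comes back in date order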
If I set aside your special case, this question is not so easy to answer. There are two basic solutions:
Keep the array unsorted, and when you try to access an element while the array is not sorted, sort it first. Let's call it "lazy sorting" (sketched at the end of this answer).
Keep the array sorted as you insert elements. Note this is not about appending the new element at the end and then sorting the whole array; it is about finding where the element should go (binary search) and placing it there. Let's call it "sorted insert".
Both techniques are correct and useful and deciding which one is better depends on your use cases.
Examples:
You want to insert hundreds of elements into the array, then access the elements, then again insert hundreds of elements, then access. In summary, you will be inserting values in big chunks. In this case, lazy sorting will be better.
You will often insert individual elements and you will access the elements often. Then sorted insert will have better performance.
Something in the middle (between inserting 1 and inserting tens of elements). You probably don't care which one of the methods will be used.
(Note that you can also use specialized structures, not based on NSArray, to keep a collection sorted, e.g. structures based on a balanced tree that keeps the number of elements in each subtree.)
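Here is a minimal sketch of the "lazy sorting" idea (in Python rather than Objective-C): adding is a cheap append, and the sort cost is only paid the next time the data is read.

class LazySortedStore:
    def __init__(self):
        self._items = []
        self._dirty = False        # True when items were added since the last sort

    def add(self, item):
        self._items.append(item)   # O(1) append, no sorting yet
        self._dirty = True

    def items(self):
        if self._dirty:            # sort only when the data is actually read
            self._items.sort()
            self._dirty = False
        return self._items

store = LazySortedStore()
store.add((3, "c"))
store.add((1, "a"))
print(store.items())   # sorted on first access: [(1, 'a'), (3, 'c')]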
I need to keep 90x90 array data for an iPhone app. How can I keep this data? Making a multi-dimensional array is one solution for this big table, or is there another solution?
If the matrix is always 90x90, then you should just use C arrays.
Unless you have a special need for passing the matrix around, searching using predicates, or some other feature of NSArray, keep it simple.
You can:
Use a single Obj-C array containing 8100 elements and map your rows and columns onto the single index yourself: index = (row * 90) + column;
Create an Obj-C array containing 90 Obj-C arrays of 90 elements each.
Hash the row and column together into a single key that you can use with a dictionary. This could be a good solution, especially if the array is sparse (see the sketch after this list).
Use a single- or multi-dimensional C array, especially if the elements of the array are plain old C types, like int. If you're storing objects, it's better to go with an Obj-C container.
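To illustrate the dictionary option (sketched in Python rather than Objective-C), the (row, column) pair itself can serve as the key, so only cells that actually hold a value are stored:

# A sparse 90x90 grid: only the cells that have been set take up space.
grid = {}

def set_cell(row, col, value):
    grid[(row, col)] = value            # the (row, col) tuple is the dictionary key

def get_cell(row, col, default=0):
    return grid.get((row, col), default)

set_cell(12, 70, 3.5)
print(get_cell(12, 70))   # 3.5
print(get_cell(0, 0))     # 0 (never set, so the default comes back)
print(len(grid))          # 1 entry instead of 8100 slots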
iPhones have a built-in database, SQLite. I'd look into that to see if it meets your needs.
I've got a very large array of data that I'm writing to a range. However, sometimes only a few elements of the array change. I believe that since I am writing the entire array to the range, all of the cells are being re-calculated. Is there any way to efficiently write a subset of the elements - specifically, those that have changed?
Update: I'm essentially following this method to save on write time:
http://www.dailydoseofexcel.com/archives/2006/12/04/writing-to-a-range-using-vba/
In particular, I have a property collection that I populate with all of the objects (they are cells) holding the data that I need. Then I loop through all of the properties and write the values to an array, indexing the array so it matches the dimensions of the range that I want to write to. Finally, with TheRange.Value = TempArray I write the data in the array to the sheet. This last step overwrites the full range, which I believe causes recalculations even in cells whose actual values didn't change.
Let me start with a few basics:
When you write to a range of cells, even if the values are the same, Excel still sees it as a change and will recalculate accordingly. It does not matter whether you have calculation turned off; the next time the range/sheet/workbook is calculated, it will recalculate everything that is dependent on that range.
As you've discovered, writing an array to a range is much, much faster than writing cell-by-cell. It is also true that reading a range into an array is much faster than reading cell-by-cell.
As to your question of only writing the subset of data that has changed, you need a fast way to identify which data has changed. This is probably obvious, but needs to be taken into account as whatever that method is will also take some time.
To write only the changed data, you can do this in two ways: either go back to writing cell-by-cell, or break the array into smaller chunks. The only way to know whether either of these is faster than writing the whole range is to try all three methods and time them with your data. If 90% of the data has changed, writing the entire block will certainly be faster than writing cell-by-cell. On the other hand, if the changed data only represents 5%, cell-by-cell may be better. The performance depends on too many variables to give a one-answer-fits-all solution.
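As a language-neutral sketch of the chunking idea (the original discussion is VBA; this is Python, and write_block is a hypothetical stand-in for the real range write, i.e. TheRange.Value = ...): diff the new array against what was last written, then group the changed cells into contiguous runs within each row so each run can be written as one block.

def changed_runs(old, new):
    # Yield (row, start_col, values) for each contiguous run of changed cells.
    for r, (old_row, new_row) in enumerate(zip(old, new)):
        start = None
        for c, (o, n) in enumerate(zip(old_row, new_row)):
            if o != n and start is None:
                start = c                      # a run of changes begins here
            elif o == n and start is not None:
                yield r, start, new_row[start:c]
                start = None
        if start is not None:                  # the run extends to the end of the row
            yield r, start, new_row[start:]

def write_block(row, col, values):
    # Hypothetical placeholder for the actual range write.
    print(f"write {values} starting at row {row}, column {col}")

old = [[1, 2, 3], [4, 5, 6]]
new = [[1, 9, 8], [4, 5, 7]]
for row, col, values in changed_runs(old, new):
    write_block(row, col, values)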