Finding the Size of a Model in Google App Maker - datasource

I've been trying to figure out how Google App Maker works with models by writing a simple button that returns the length (number of records) of a model I've created and loaded temporary data into (it should have about 150 records).
I'm working with a model called Generic Logs that has ten different fields. Here is what I have tried so far:
app.models.GenericLogs.fields._values.length - Returns 10
alert(app.models.GenericLogs.fields.Id.maxValue) - Returns null
alert(app.models._values.length) - Returns 2 (I have a second model)
alert(app.models.GenericLogs.datasources._values.length) - Returns 1
What I'm after is the 150+ count covering all of the records (non-unique).

Option 1:
Set your datasource's limit setting to 0 and do console.log(app.datasources.YourDatasource.items.length). The downside to this is that all records will be returned to the client, which might slow down your UI.
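For example (a sketch only, assuming the datasource is named YourDatasource, its limit setting is 0, and a load callback is used so the count is read after the data has arrived):
// Client script: load the datasource, then log how many records it returned.
app.datasources.YourDatasource.load(function() {
  console.log(app.datasources.YourDatasource.items.length);
});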
Option 2:
Create a server script function:
function YourFunction() {
  // Query the model with no record limit (0 = unlimited) and return only the count.
  var query = app.models.YourModel.newQuery();
  query.limit = 0;
  var results = query.run();
  return results.length;
}
Then create a button in your client and attach the following to its onClick event:
google.script.run.withSuccessHandler(function (serverresult) {console.log(serverresult)}).YourFunction()
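Laid out more readably (the withFailureHandler call is an optional addition for surfacing server-side errors, not part of the original one-liner):
google.script.run
  .withSuccessHandler(function(serverresult) {
    console.log(serverresult); // total number of records in the model
  })
  .withFailureHandler(function(error) {
    console.error(error);
  })
  .YourFunction();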
Reference: https://developers.google.com/appmaker/scripting/server#querying_records

Related

How to code a simple algorithm to fetch list of data through pagination in a fresh new application?

I'm making a clone of a social app, using GraphQL as my backend. My problem is that every time I query a list of data it returns the same result. When I release the app the user base will be very small, so the amount of data will also be small. I'm facing the issue described below:
1. My data in the database looks like:
Id=1 title=hello1
Id=2 title=hello2
Id=3 title=hello3
2. When I query data through pagination with limit=3, the list of items I get is:
Query 1
Id=1 title=hello1
Id=2 title=hello2
Id=3 title=hello3
3. When I add new items to the database, they get inserted in between the existing items, like below:
Id=1 title=hello1
Id=4 title=hello4
Id=2 title=hello2
Id=3 title=hello3
Id=5 title=hello5
4. So the next fresh query result (limit=3) will be:
Query 2
Id=1 title=hello1
Id=4 title=hello4
Id=2 title=hello2
Look at the data set: previously the query result was Id=1, 2 & 3, and now it is Id=1, 4 & 2, so the user sees largely the same result again because Id=1 and Id=2 are in the new list.
If instead I save the pagination nextToken/cursor of the first query (Id=3, from Query 1), then after new data is added the next query will start from Id=5, because that is the first item after Id=3. Looking at the new data set, it will miss Id=4 entirely, because the nextToken was saved at Id=3 and the query resumes from Id=5 (a small sketch of both behaviours is shown below).
If your suggestion is to add a sort key such as created-at: once I add some filter, the data set becomes so selective that it may limit the number of items in the feed, and a feed should be able to page through an unbounded amount of data.
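A minimal sketch (plain JavaScript over an in-memory array, purely illustrative) of the two behaviours described above, i.e. duplicates with offset-style paging and a skipped row with a saved cursor:
// Feed after Id=4 and Id=5 were inserted, in the order described above.
const feed = [
  { id: 1, title: 'hello1' },
  { id: 4, title: 'hello4' }, // inserted later, between existing rows
  { id: 2, title: 'hello2' },
  { id: 3, title: 'hello3' },
  { id: 5, title: 'hello5' }
];

// Offset-style paging: a fresh query of the first 3 rows repeats Id=1 and Id=2.
console.log(feed.slice(0, 3).map(r => r.id)); // [1, 4, 2]

// Cursor saved at Id=3: resuming after that position returns only Id=5, so Id=4 is skipped.
const cursorIndex = feed.findIndex(r => r.id === 3);
console.log(feed.slice(cursorIndex + 1).map(r => r.id)); // [5]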

HERE How to get more than 100 places with discover

I'm developing a system to show the user all the activities in an area, using the HERE Developer APIs with the Discover request. Discover limits results to 100 places per request, which is fine, but I would like to know whether it is possible to ask for the rest of the places in a second API call so I can get them all.
For example, if there are 130 restaurants near my user, I would first ask for the first 100 and then for the other 30, so the user gets the whole picture.
The discover API does not currently support pagination to serve this requirement.
This is explained here:
https://developer.here.com/documentation/geocoding-search-api/migration_guide/migration-places/topics-api/search.html
However, you can implement pagination on the client side from the Discover result. Here is a sample I created purely for reference that might help you.
Code Snippet:
function getResultCount(arr) {
  // Work out how many pages of 10 items the result array needs.
  let arrLength = arr.length;
  let noOfPagination = Math.floor(arrLength / 10);
  let remainder = arrLength % 10;
  if (remainder > 0) {
    noOfPagination = noOfPagination + 1; // partial last page
  }
  createPagination(noOfPagination);
}
Complete Sample Example: https://jsfiddle.net/raj0665/f5w12u9s/4/
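To complement the page-count helper above, here is a minimal sketch of serving one page at a time from the cached Discover items (getPage and the page size of 10 are illustrative choices, not part of the HERE API):
// items: the array of places returned by the Discover call, kept on the client.
function getPage(items, pageNumber, pageSize = 10) {
  let start = (pageNumber - 1) * pageSize;
  return items.slice(start, start + pageSize);
}

// Usage: render the second page of 10 places.
// let pageTwo = getPage(discoverItems, 2);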

How to query module (Potentials) on modified time using Vtiger API

Currently I'm using the Vtiger API sync operation to fetch the last modified Potentials. The issue with sync is that it fetches only the records assigned to the user performing the operation, whereas I want all the Potentials added/modified by any user.
Now I'm trying to do the same with the Query operation. My params look like this:
var hourAgo = Math.floor((new Date().getTime() - 60 * 60 * 1000) / 1000);
var params = "operation=query&sessionName=" + SESSION_ID + "&query=Select * from Potentials where modifiedtime > hourAgo;"
It seems like the query is returning all of the records instead of only the recently modified ones.
Any help would be appreciated.
Thank you.
P.S. I'm using Apps Script to do this.
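One thing worth noting (an observation, not a confirmed fix): in the params string above, hourAgo sits inside the quotes, so the literal text "hourAgo" is sent to Vtiger rather than the computed value. Below is a sketch of building the query with the value concatenated in, assuming Vtiger expects the datetime as a quoted 'yyyy-MM-dd HH:mm:ss' string and that the query should be URL-encoded:
// Apps Script sketch: compute the cutoff, format it, and concatenate it into the query.
var hourAgoDate = new Date(new Date().getTime() - 60 * 60 * 1000);
var hourAgoStr = Utilities.formatDate(hourAgoDate, 'GMT', 'yyyy-MM-dd HH:mm:ss');
var query = "SELECT * FROM Potentials WHERE modifiedtime > '" + hourAgoStr + "';";
var params = "operation=query&sessionName=" + SESSION_ID + "&query=" + encodeURIComponent(query);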

How to do math operation with columns on grouped rows

I have an Event model with the following attributes (I quote only the problem-related ones); the model is filled periodically by an API call to an external service (Google Calendar):
colorid: number # (0-11)
event_start: datetime
event_end: datetime
I need to sum the durations of events grouped by colorid. I have an Event instance method to calculate a single event's duration (in minutes):
def event_duration
  ((event_end.to_datetime - event_start.to_datetime) * 24 * 60).to_i
end
Now, I need to do something like this:
event = Event.group(:colorid).sum(event_duration)
But this does not work for me: I get an error saying that the event_duration column does not exist. My idea is to add one more attribute to the Event model, "event_duration", and compute and update it during Event record creation; then I would have a real column called "event_duration" and might be able to use sum on it. But I'm not sure that is a good, clean solution, since I would like the model data to reflect the "raw" data received from the API call and do all the math and statistics on top of it.
event_duration is an instance method, not a column name; the error was raised because Event.sum can only sum an actual database column.
In your case, I think it would be easier to use enumerable methods:
duration_by_color_id = {}
grouped_events = Event.all.group_by(&:colorid)
grouped_events.each do |colorid, events|
  duration_by_color_id[colorid] = events.collect(&:event_duration).sum
end
Sources:
Enumerable's group_by
Enumerable's collect

twitter4j: when to STOP when no more tweets are available?

So, I've figured out how to get more than 100 tweets, thanks to How to retrieve more than 100 results using Twitter4j
However, how do I make the script stop, and print "stop", when all available results have been retrieved? For example, I set
int numberOfTweets = 512;
And, it finds just 82 tweets matching my query.
However, because of:
while (tweets.size () < numberOfTweets)
it still continues to query over and over until I max out my rate limit of 180 requests per 15 minutes.
I'm really a novice at Java, so I would really appreciate it if you could show me how to resolve this by modifying the script from the first answer at How to retrieve more than 100 results using Twitter4j
Thanks in advance!
You only need to modify things in the try{} block. One solution is to check whether the ID of the last tweet found on the previous loop iteration (previousLastID) is the same as the ID of the last tweet (lastID) in the newly collected batch (newTweets). If it is, the new batch's elements already exist in what has been gathered, and we have reached the end of the available tweets for this hashtag.
try {
    QueryResult result = twitter.search(query);
    List<Status> newTweets = result.getTweets();
    // Remember the lowest ID seen so far, then update it from the new batch.
    long previousLastID = lastID;
    for (Status t : newTweets)
        if (t.getId() < lastID) lastID = t.getId();
    // If the lowest ID did not change, this batch contained nothing new: stop gathering.
    if (previousLastID == lastID) {
        println("Last batch (" + tweets.size() + " tweets) was the same as the previous one. Stopping the gathering process.");
        break;
    }
    // Assumed continuation, following the pattern of the linked answer:
    // keep the new tweets and ask only for older ones on the next iteration.
    tweets.addAll(newTweets);
    query.setMaxId(lastID - 1);
} catch (TwitterException te) {
    println("Couldn't connect: " + te);
}