Currently I'm using the Vtiger API sync operation to fetch the last modified Potentials. The issue with sync is that it fetches only those records that are assigned to the user performing the operation, whereas I want all the Potentials added or modified by any user.
Now I'm trying the same thing using the query operation. My params look like this:
var hourAgo = Math.floor((new Date().getTime() - 60 * 60 * 1000) / 1000);
var params = "operation=query&sessionName=" + SESSION_ID + "&query=Select * from Potentials where modifiedtime > hourAgo;"
It seems like the query is returning all the records instead of just the recently modified ones.
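To clarify, what I'm aiming for is something like this, with hourAgo actually interpolated into the query string and formatted the way I believe Vtiger stores modifiedtime (the 'yyyy-MM-dd HH:mm:ss' format is my assumption):
var hourAgo = new Date(new Date().getTime() - 60 * 60 * 1000);
// assumption: Vtiger compares modifiedtime against 'yyyy-MM-dd HH:mm:ss' strings
var stamp = Utilities.formatDate(hourAgo, "GMT", "yyyy-MM-dd HH:mm:ss");
var query = "SELECT * FROM Potentials WHERE modifiedtime > '" + stamp + "';";
var params = "operation=query&sessionName=" + SESSION_ID + "&query=" + encodeURIComponent(query);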
Any help would be appreciated.
Thank you.
P.S. I'm using Apps Script for this.
I need to create 2 items, use a GET request to check that everything is OK, and after that delete these items.
I have one test case, getItem, which uses two helpers (postItem and deleteItem).
For getItem I need an itemId, which I get from postItem, where this variable is defined. After that I use the same itemId for deleteItem in an after-hook. Here is what I do:
Feature: get item

Background: Pre-conditions
  * url apiUrl
  * call read('classpath:/helpers/features/postItem.feature')
  * configure afterScenario = function(){ karate.call('classpath:/helpers/features/deleteItem.feature') }

Scenario: Get items
  * path '/items/'
  And param id = itemId
  When method Get
  Then status 200
It works, but only because I create a single item and delete it correctly: itemId is defined in postItem and I'm able to re-use it. I saw how to use karate.repeat from HERE, but when I do the following:
* def item = function(i){ return karate.call('classpath:/helpers/features/postItem.feature') }
I'm not able to get itemId, and as a result I'm not able to delete the items. I have tried to use
* print item.response
but it prints "null".
So I have 2 questions:
How do I get a variable back from postItem?
How do I delete each of the created items using an after-hook?
May I offer some advice? I would NOT try to create helpers and re-use them like this. Please take some time to read this, and then you may understand why: https://stackoverflow.com/a/54126724/143475
I would definitely not use a hook either. Please think of the people who will need to maintain your test in the future; they will cry.
Here is how I would write your test. And let me repeat: it is OK to repeat some code when you are doing test automation. For a real, working example, see here.
Background:
  * url apiUrl + '/items'

Scenario:
  * request {}
  * method post
  * status 201
  * path response.id
  * method get
  * status 200
  * request {}
  * method post
  * status 201
  * path response.id
  * method delete
  # and so on
Otherwise, the only thing I will say is: please refer to the documentation on how you can call features and get data back in a loop without using karate.repeat(), which should be used only for creating JSON arrays. You can see this answer, which has an example and links to the documentation: https://stackoverflow.com/a/75394445/143475
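For instance, here is a minimal sketch of that loop style, reusing the helper path from the question (the response.data.id shape is an assumption). Calling a feature with a JSON array argument runs it once per element and returns an array of results:
# calling a feature with an array argument runs it once per element
* def results = call read('classpath:/helpers/features/postItem.feature') [{}, {}]
# collect the created ids (response shape assumed)
* def ids = karate.map(results, function(x){ return x.response.data.id })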
I have found a solution for doing this using the DRY pattern + after-hooks.
Feature: get items

Background: Pre-conditions
  * url apiUrl
  * def item = function(i){ return karate.call('classpath:/helpers/features/postItem.feature') }
  * def createdItem = karate.repeat(2, item)
  * table createdItems
    | itemId                          |
    | createdItem[0].response.data.id |
    | createdItem[1].response.data.id |
  * configure afterScenario = function(){ karate.call('classpath:/helpers/features/deleteItem.feature', createdItems) }

Scenario: Get all items
  * path '/items'
  When method Get
  Then status 200
It works, but it can probably be improved; I'm new to this.
So, basically, what I do is:
Create 2 items for the GET request, using karate.repeat and calling the postItem feature.
Create a table with the itemId references.
Create an after-hook that calls deleteItem.feature, which takes itemId as an argument; I provide the created table for this (see the sketch after this list).
Run the scenario, which checks the created items.
After that, the created items are deleted by the after-hook.
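For completeness, here is a sketch of what deleteItem.feature can look like. When it is called with the createdItems table, Karate invokes it once per row, and each row's itemId becomes a variable (the endpoint path and the 204 status are assumptions about the API):
Feature: delete item

Scenario: Delete item
  * url apiUrl
  * path '/items/' + itemId
  When method delete
  Then status 204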
As a result, I have a clear scenario, which contains:
Pre-conditions --> creating items (preparing data)
Scenario body --> GET method
Post-conditions --> deleting created items and returning to default state.
I do all of this because I don't have DB read permission. In an ideal world, the test data would be prepared via SQL and deleted the same way.
Hope this helps someone! If you find a better solution, feel free to post it here. Thanks!
I'm developing a system to show the user all the activities in an area. I'm using the HERE Developer APIs with the Discover API request. Discover is limited to 100 activities per request, which is fine, but I would like to know if it is possible to ask for the rest of the places in a second API call to get them all.
For example, if there are 130 restaurants near my user, I would first ask for the first 100 and then for the other 30, so the user gets the whole picture.
The discover API does not currently support pagination to serve this requirement.
This is explained here:
https://developer.here.com/documentation/geocoding-search-api/migration_guide/migration-places/topics-api/search.html
However, you can implement pagination on the client side from the Discover result. Here is a sample that I created purely for reference, which might help you.
Code Snippet:
function getResultCount(arr) {
  let arrLength = arr.length;
  // number of full 10-item pages, plus one more page for any remainder
  let noOfPagination = Math.floor(arrLength / 10);
  let remainder = arrLength % 10;
  if (remainder > 0) {
    noOfPagination = noOfPagination + 1;
  }
  createPagination(noOfPagination);
}
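The snippet assumes a createPagination helper that renders the page links; a minimal sketch of the matching page-slicing logic (a hypothetical helper, 10 items per page) might look like this:
function getPage(arr, pageNo) {
  // pageNo is 1-based; returns the slice of up to 10 items for that page
  let start = (pageNo - 1) * 10;
  return arr.slice(start, start + 10);
}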
Complete Sample Example: https://jsfiddle.net/raj0665/f5w12u9s/4/
I have succeeded in using Thunkable to archive old data in a Fusion Table. I would now like this to be done in the background of the app using Google Apps Script.
The Thunkable blocks with SQL are as follows:
Query 1:
SELECT ROWID FROM TableID WHERE Duration<= Clock.Now
SET GLOBAL RESULTS to List from CSV Table text (Result from Query1)
For each number from 2 to length of list by 1 DO Query 2
Query 2:
UPDATE TableID SET Availability='uNAVAILABLE' WHERE ROWID='list item 2 from result from Query 1'
Remove list item 2
Query 3:
DELETE FROM TableID WHERE Availability='Unavailable'
How can I convert this to Google Apps Script and link it to a Fusion Table? Thank you.
Per the documentation:
A quick way to try out the API is to type the command or query directly into your browser's toolbar. You can adjust the URL as you change your query or data needs and you'll get immediate feedback. You can only do this with tables that are exportable and either public or unlisted, and you need to include your API key.
Here's a sample that runs a query to select all rows in a given table,
https://www.googleapis.com/fusiontables/v2/query?sql=SELECT * FROM 1KxVV0wQXhxhMScSDuqr-0Ebf0YEt4m4xzVplKd4&key={your API key}
So, to implement this using Google Apps Script, try using the UrlFetchApp class:
Fetch resources and communicate with other hosts over the Internet. This service allows scripts to communicate with other applications or access other resources on the web by fetching URLs. A script can use the URL Fetch service to issue HTTP and HTTPS requests and receive responses.
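For instance, a minimal sketch of running the sample SELECT above from Apps Script (the API key is a placeholder, and the table must be exportable and public or unlisted, as noted above):
function queryFusionTable() {
  var sql = "SELECT * FROM 1KxVV0wQXhxhMScSDuqr-0Ebf0YEt4m4xzVplKd4";
  var url = "https://www.googleapis.com/fusiontables/v2/query"
      + "?sql=" + encodeURIComponent(sql)
      + "&key=YOUR_API_KEY"; // placeholder API key
  var response = UrlFetchApp.fetch(url);
  Logger.log(response.getContentText());
}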
You may also want to check the sample Google Apps Script and Fusion Table query from this GitHub post for additional insight.
function readFacName(fac, city){
// public fusion table
// https://www.google.com/fusiontables/DataSource?docid=1tL67aacGcCyMfAg9PUo_-gp4qm74GDtFiCMtFg
var select = "select FACNAME from 1tL67aacGcCyMfAg9PUo_-gp4qm74GDtFiCMtFg ";
var where = "where FAC_ZIP5 = '" + fac + "' AND FAC_CITY = '" + city +"'";
var query = encodeURIComponent(select + where);
var url = "http://www.google.com/fusiontables/api/query?sql=" + query;
var response = UrlFetchApp.fetch(url, {method: "get"});
return response.getContentText();
}
function fTable() {
Logger.log(readFacName("94609","OAKLAND"));
}
I want to retrieve all the files from a cabinet (called 'Wombat Insurance Co'). Currently I am using this DQL query:
select r_object_id, object_name from dm_document(all)
where folder('/Wombat Insurance Co', descend);
This is OK, except it only returns a maximum of 100 results. If there are 5000 files in the cabinet, I want to get all 5000 results. Is there a way to use pagination to get all the results?
I have tried this query:
select r_object_id, object_name from dm_document(all)
where folder('/Wombat Insurance Co', descend)
ENABLE (RETURN_RANGE 0 100 'r_object_id DESC');
with the intention of getting results in 100-file increments, but this query gives me an error when I try to execute it. The error says:
com.emc.documentum.fs.services.core.CoreServiceException: "QUERY" action failed.
java.lang.Exception: [DM_QUERY2_E_UNRECOGNIZED_HINT]error:
"RETURN_RANGE is an unknown hint or is being used incorrectly."
I think I am using the RETURN_RANGE hint correctly, but maybe I'm not. Any help would be appreciated!
I have also tried using the hint ENABLE(FETCH_ALL_RESULTS 0) but this still only returns a maximum of 100 results.
To clarify, my question is: how can I get all the files from a cabinet?
You have already accepted an answer that uses DFS.
Since you are also working with DFC, this information might help you.
DFS:
If you are using DFS, you have to be aware of the number of concurrent sessions that you can consume with DFS.
I think it is 100 or 150.
DFC:
There is in fact a limit on how many results you can fetch via DFC (I'm not sure about DFS).
Go to your DFC application (Webtop, DA, or anything else) and check the dfc.properties file.
# Maximum number of results to retrieve by a query search.
# min value: 1, max value: 10000000
#
dfc.search.max_results = 100
# Maximum number of results to retrieve per source by a query search.
# min value: 1, max value: 10000000
#
dfc.search.max_results_per_source = 400
There is a dfc.properties.full (or similarly named) file where you can verify these values for your system.
And I'm talking about the Content Server side, not the client-side dfc.properties file.
If you use the ENABLE (RETURN_TOP) hint with DFC, there are two ways to fetch the results from the Content Server:
Object based
Row based
You have to configure this using the return_top_results_row_based parameter in the server.ini file.
All of these changes are on the Documentum server side, not in your DFC/DQL client.
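For reference, a sketch of the relevant server.ini entry (the section name and the value shown are from memory and may differ by version):
[SERVER_STARTUP]
return_top_results_row_based = T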
Aha, I've figured it out. Using DFS with Java (an abstraction layer on top of DFC) you can set the starting index for query results:
String queryStr = "select r_object_id, object_name from dm_document(all) "
        + "where folder('/Wombat Insurance Co', descend)";
PassthroughQuery query = new PassthroughQuery();
query.setQueryString(queryStr);
query.addRepository(repositoryStr);
QueryExecution queryEx = new QueryExecution();
queryEx.setCacheStrategyType(CacheStrategyType.DEFAULT_CACHE_STRATEGY);
queryEx.setStartingIndex(currentIndex); // set start index here
OperationOptions operationOptions = null;
// will return 100 results starting from currentIndex
QueryResult queryResult = queryService.execute(query, queryEx, operationOptions);
You can just increment the currentIndex variable to get all results.
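A rough sketch of that loop, assuming a page size of 100 (reading the results via getDataPackage().getDataObjects() is based on the DFS data model and may differ slightly by version):
int currentIndex = 0;
int pageSize = 100;
QueryResult queryResult;
do {
    queryEx.setStartingIndex(currentIndex);
    queryResult = queryService.execute(query, queryEx, operationOptions);
    // process this page of results here
    currentIndex += pageSize;
} while (queryResult.getDataPackage().getDataObjects().size() == pageSize);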
Well, the hint is being used incorrectly. Start with 1, not 0.
There is no built-in limit in DQL itself. All results are returned by default. The reason you get only 100 results must have something to do with the way you're using DFC (or whichever other client you are using). Using IDfCollection in the following way will surely return everything:
IDfQuery query = new DfQuery("SELECT r_object_id, object_name "
+ "FROM dm_document(all) WHERE FOLDER('/System', DESCEND)");
IDfCollection coll = query.execute(session, IDfQuery.DF_READ_QUERY);
int i = 0;
while (coll.next()) i++;
System.out.println("Number of results: " + i);
In a test environment (CS 6.7 SP1 x64, MS SQL), this outputs:
Number of results: 37162
Now, there's proof. Paging is, however, a good idea if you want to improve the overall performance of your application. As mentioned, start counting at 1:
ENABLE(RETURN_RANGE 1 100 'r_object_id DESC')
This way of paging requires that sorting be specified in the hint rather than as a DQL statement. If all you want is the first 100 records, try this hint instead:
ENABLE(RETURN_TOP 100)
In this case sorting with ORDER BY will work as you'd expect.
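Going back to RETURN_RANGE: to fetch everything page by page, a sketch along these lines (DFC; the bounds are 1-based and advance by 100 per iteration) could work:
// sketch: page through results 100 at a time using RETURN_RANGE
String base = "SELECT r_object_id, object_name FROM dm_document(all) "
        + "WHERE FOLDER('/Wombat Insurance Co', DESCEND) ";
int first = 1, pageSize = 100;
while (true) {
    String dql = base + "ENABLE(RETURN_RANGE " + first + " "
            + (first + pageSize - 1) + " 'r_object_id DESC')";
    IDfCollection coll = new DfQuery(dql).execute(session, IDfQuery.DF_READ_QUERY);
    int rows = 0;
    while (coll.next()) rows++;
    coll.close();
    if (rows < pageSize) break; // last page reached
    first += pageSize;
}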
Lastly, note that adding (all) will not only find all documents matching the specified qualification, but all versions of every document. If this was your intention, that's fine.
I've worked with the DFC API (with Java) for a while, but I don't remember any default limit on queries; IIRC we always got all of the documents, without any limit. Actually (according to my notes) we had to set a limit explicitly with, for example, enable (return_top 2000). (As far as I know, the syntax might depend on the DBMS behind EMC Documentum.)
Just a guess: check your dfc.properties file.
I am missing the SQL piece of this to bulk update attributes by SKU/UPC.
Running EE1.10 FYI
I have all the rest of the code working, but I'm not sure about the who/what/why of actually updating our attributes, and I haven't been able to find them. My logic is:
Open a CSV and grab all SKUs and associated attributes into a 2D array
Parse the SKU into an entity_id
Take the entity_id and the attribute and run updates until finished
Take the rest of the day off since it's Friday
Here's my (almost finished) code; I would GREATLY appreciate some help.
/**
 * FUNCTION: updateAttrib
 *
 * REQS: $db_magento
 *   Session resource
 *
 * REQS: entity_id
 *   Product entity value
 *
 * REQS: $attrib
 *   Attribute to alter
 */
See my response for working production code. Hope this helps someone in the Magento community.
While this may technically work, the code you have written is just about the last way you should do this.
In Magento, you really should be using the models provided by the code and not write database queries on your own.
In your case, if you need to update attributes for 1 or many products, there is a way for you to do that very quickly (and pretty safely).
If you look in: /app/code/core/Mage/Adminhtml/controllers/Catalog/Product/Action/AttributeController.php you will find that this controller is dedicated to updating multiple products quickly.
If you look in the saveAction() function you will find the following line of code:
Mage::getSingleton('catalog/product_action')
->updateAttributes($this->_getHelper()->getProductIds(), $attributesData, $storeId);
This code is responsible for updating all the product IDs you want, with only the changed attributes, for a single store at a time.
The first parameter is basically an array of Product IDs. If you only want to update a single product, just put it in an array.
The second parameter is an array that contains the attributes you want to update for the given products. For example if you wanted to update price to $10 and weight to 5, you would pass the following array:
array('price' => 10.00, 'weight' => 5)
Then finally, the third parameter is the ID of the store you want these updates applied to. Most likely this number will be either 0 or 1.
I would play around with this function call and use this instead of writing and maintaining your own database queries.
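Putting the three parameters together, a minimal sketch of the call (the product ID, attribute values, and store ID below are placeholders):
Mage::getSingleton('catalog/product_action')->updateAttributes(
    array(123),                             // product IDs to update (placeholder)
    array('price' => 10.00, 'weight' => 5), // attribute code => new value
    0                                       // store ID (0 = default/admin scope)
);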
The general update query will look like this:
UPDATE catalog_product_entity_[backend_type] cpex
SET cpex.value = ?
WHERE cpex.attribute_id = ?
  AND cpex.entity_id = ?
In order to find the [backend_type] associated with the attribute:
SELECT backend_type
FROM eav_attribute
WHERE entity_type_id =
    (SELECT entity_type_id
     FROM eav_entity_type
     WHERE entity_type_code = 'catalog_product')
  AND attribute_id = ?
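For example, if that lookup returns backend_type = 'decimal' (as it does for price), the update runs against catalog_product_entity_decimal; the attribute_id and entity_id below are placeholder values:
UPDATE catalog_product_entity_decimal cpex
SET cpex.value = 10.00
WHERE cpex.attribute_id = 75  -- placeholder attribute_id
  AND cpex.entity_id = 123;   -- placeholder entity_id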
You can get more info from the following blog article:
http://www.blog.magepsycho.com/magento-eav-structure-role-of-eav_attributes-backend_type-field/
Hope this helps you.