Efficiently Retrieve Transactions and Holdings for an Item from Yodlee

Every night we need to pull all of the data (holdings and transactions) in the Yodlee database for our users and store it in our own database. From what I gather, there seems to be no efficient way to do this. Option 2 of Yodlee TransactionView and ItemId indicates that I should call getItemSummaryForItem1 to retrieve the ItemSummary for an Item and then subsequently run a TransactionSearch to retrieve the transactions. This makes a lot of sense if you ONLY want transactions, in which case I would run the following getItemSummaryForItem1 call:
// Create Data Extent
DataExtent dataExtent = new DataExtent();
dataExtent.startLevel = 0;
dataExtent.startLevelSpecified = true;
dataExtent.endLevel = 0;
dataExtent.endLevelSpecified = true;
// Get ItemSummary
var ItemSummary = new DataService().getItemSummaryForItem1(_userContext, itemId, true, dataExtent);
[Then the TransactionSearch would follow]
This works great and runs really quickly, but in my scenario I want holdings as well. To retrieve holdings I need to change the endLevel of the DataExtent from 0 to 2. However, when I do that the call takes significantly longer AND the ItemSummary comes back with all of the transactions, which is EXTREMELY inefficient.
Is there any way to do what I want, i.e. pull transactions and holdings for an Item, efficiently? Based on the documentation I can't seem to find one. Thanks in advance.

Yes, there is a way to avoid this.
Steps:
1) Don't set endLevel = 2; keep endLevel = 0.
2) The DataExtent also takes an array of extentLevels, so set its first element to 0 (zero) and its second element to 2.
Sample SOAP XML:
<dex xmlns="">
  <startLevel>0</startLevel>
  <endLevel>0</endLevel>
  <extentLevels>
    <elements>0</elements>
    <elements>2</elements>
  </extentLevels>
</dex>
Sample code (Java):
// Keep start/end level at 0, but request levels 0 and 2 explicitly via extentLevels
dataExtent.setStartLevel(0);
dataExtent.setEndLevel(0);
Integer[] array = {0, 2};
ArrayOfint levelArray = new ArrayOfint();
levelArray.setElements(array);
dataExtent.setExtentLevels(levelArray);
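If you are calling the API from C# as in the question, the same idea should translate roughly as follows. This is only a sketch: the extentLevels member name and its type (a plain int[] here) are assumptions based on the Java sample and the SOAP payload, so check your generated Yodlee proxy classes for the exact shape.
// Create Data Extent: start/end at level 0, but request levels 0 and 2 explicitly
DataExtent dataExtent = new DataExtent();
dataExtent.startLevel = 0;
dataExtent.startLevelSpecified = true;
dataExtent.endLevel = 0;
dataExtent.endLevelSpecified = true;
dataExtent.extentLevels = new int[] { 0, 2 };  // assumed member name/type - verify against your proxy
// Get the ItemSummary without pulling the full 0-2 range (i.e. without all the transactions)
var itemSummary = new DataService().getItemSummaryForItem1(_userContext, itemId, true, dataExtent);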
You can also check the getItemSummaryForItem1 documentation at Yodlee's developer portal.

Related

How to code a simple algorithm to fetch a list of data through pagination in a fresh new application?

I'm making a clone of a social app, using GraphQL as my backend. My problem is that every time I query a list of data it returns the same result. When I release the app, the user base will be very small, so the amount of data will also be small. That leads to the issue described below:
1. My data in the database looks like:
Id=1 title=hello1
Id=2 title=hello2
Id=3 title=hello3
2. When I query data through pagination with limit=3, I get a list of items like:
Query 1
Id=1 title=hello1
Id=2 title=hello2
Id=3 title=hello3
3. When I add new items to the database, they end up interleaved between the existing items, like below:
Id=1 title=hello1
Id=4 title=hello4
Id=2 title=hello2
Id=3 title=hello3
Id=5 title=hello5
4. So the next fresh query result (limit=3) will be:
Query 2
Id=1 title=hello1
Id=4 title=hello4
Id=2 title=hello2
Look at the data set: previously our query result was Id=1, 2 & 3; now it is Id=1, 4 & 2, so the user gets largely the same result, because Id=1 and 2 are in the new list as well.
If I instead save the pagination nextToken/cursor (Id=3) from the first query (Query 1), then after new data is added the next query will start from Id=5, because that is what comes after Id=3. Looking at the new dataset, it will miss Id=4, because the nextToken was saved at Id=3 and the query therefore resumes at Id=5. Hope you can understand.
If your suggestion is to add a sort key on created-at, I want to say that if I add such a filter, the data set becomes so selective that it might limit how much data ends up in the feed, and a feed should be able to query an unlimited amount of data.

How can I improve performance when using Django querysets?

I'm trying to make a news feed. Each time the page is called, the server must send multiple items. One item contains a post, its number of likes, number of comments, number of comment children, the comments data, the comment children data, etc.
My problem is that each time my page is called, it takes more than 5 seconds to load. I've already implemented a caching system, but it's still slow.
posts = Posts.objects.filter(page="feed").order_by('-likes')[:10].cache()
posts = PostsSerializer(posts, many=True)
hasPosted = Posts.objects.filter(page="feed", author="me").cache()
hasPosted = PostsSerializer(hasPosted, many=True)
for post in posts.data:
    commentsNum = Comments.objects.filter(parent=post["id"]).cache(ops=['count'])
    post["comments"] = len(commentsNum)
    comments = Comments.objects.filter(parent=post["id"]).order_by('-likes')[:10].cache()
    liked = Likes.objects.filter(post_id=post["id"], author="me").cache()
    comments = CommentsSerializer(comments, many=True)
    commentsObj[post["id"]] = {}
    for comment in comments.data:
        children = CommentChildren.objects.filter(parent=comment["id"]).order_by('date')[:10].cache()
        numChildren = CommentChildren.objects.filter(parent=comment["id"]).cache(ops=['count'])
        post["comments"] = post["comments"] + len(numChildren)
        children = CommentChildrenSerializer(children, many=True)
        liked = Likes.objects.filter(post_id=comment["id"], author="me").cache()
        for child in children.data:
            if child["parent"] == comment["id"]:
                liked = Likes.objects.filter(post_id=child["id"], author="me").cache()
I'm trying to find a simple way to fetch all of this data more quickly and without unnecessary database hits. I need to reduce the loading time from 5 seconds to less than 1 if possible.
Any suggestions?
Add the number of children as an integer field on the comment that gets updated every time a child comment is added or removed. That way, you won't have to query for that value. You can do this using signals (see the sketch at the end of this answer).
Add an ArrayField (if you're using Postgres) or something similar on your Profile model that stores the primary keys of all liked posts. Instead of querying the Likes model, you would be able to do this:
profile = Profile.objects.get(name='me')
liked = True if comment_pk in profile.liked_posts else False
Use select_related / prefetch_related to pull in the CommentChildren along with the comments instead of making an extra query for them.
Implementing these 3 items will get rid of all the DB queries being executed in the "for comment in comments.data" loop, which is probably taking up the majority of the processing time.
If you're interested, check out django-debug-toolbar, which lets you see which queries are being executed on every page.
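A minimal sketch of the first suggestion (the signals-maintained counter), assuming a children_count integer field on Comments and that CommentChildren has a parent foreign key to Comments; the app and field names here are illustrative, not taken from the question:
from django.db.models import F
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from myapp.models import Comments, CommentChildren  # hypothetical app/model names

@receiver(post_save, sender=CommentChildren)
def increment_children_count(sender, instance, created, **kwargs):
    # Only bump the counter when a new child comment is created, not on every save
    if created:
        Comments.objects.filter(pk=instance.parent_id).update(children_count=F('children_count') + 1)

@receiver(post_delete, sender=CommentChildren)
def decrement_children_count(sender, instance, **kwargs):
    Comments.objects.filter(pk=instance.parent_id).update(children_count=F('children_count') - 1)
With a counter like that in place, the per-comment count queries inside the loop can be dropped and the serializer can read children_count straight off the comment row.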

Get ALL tweets, not just recent ones via twitter API (Using twitter4j - Java)

I've built an app using twitter4j which pulls in a bunch of tweets when I enter a keyword, takes the geolocation out of the tweet (or falls back to the profile location), then maps them using ammaps. The problem is I'm only getting a small portion of tweets. Is there some kind of limit here? I've got a DB collecting the tweet data, so soon enough it will have a decent amount, but I'm curious as to why I'm only getting tweets from within the last 12 hours or so.
For example, if I search by my username I only get one tweet, which I sent today.
Thanks for any info!
EDIT: I understand Twitter doesn't allow public access to the firehose; my question is more about why I'm limited to finding only recent tweets.
You need to keep redoing the query, resetting the maxId every time, until you get nothing back. You can also use setSince and setUntil.
An example:
Query query = new Query();
query.setCount(DEFAULT_QUERY_COUNT);
query.setLang("en");
// set the bounding dates
query.setSince(sdf.format(startDate));
query.setUntil(sdf.format(endDate));
QueryResult result = searchWithRetry(twitter, query); // searchWithRetry is my function that deals with rate limits
while (result.getTweets().size() != 0) {
    List<Status> tweets = result.getTweets();
    System.out.print("# Tweets:\t" + tweets.size());
    Long minId = Long.MAX_VALUE;
    for (Status tweet : tweets) {
        // do stuff here
        if (tweet.getId() < minId)
            minId = tweet.getId();
    }
    query.setMaxId(minId - 1);
    result = searchWithRetry(twitter, query);
}
Really it depends on which API you are using, i.e. the Streaming or the Search API. In the Search API there is an optional parameter, result_type, which can take the following values:
* mixed: include both popular and real-time results in the response.
* recent: return only the most recent results in the response.
* popular: return only the most popular results in the response.
The default is mixed.
As far as I understand, you are using recent, which is why you are getting only the recent set of tweets. Another issue is the low volume of tweets that carry geolocation information: because very few users add geolocation to their tweets or profile, you get very few such tweets.
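For reference, a small twitter4j sketch of setting that parameter (this assumes twitter4j 4.x, where result_type is exposed as the Query.ResultType enum; older versions take string constants instead):
// Ask the Search API for both popular and real-time results instead of only recent ones
Query query = new Query("some keyword");
query.setCount(100);
query.setResultType(Query.ResultType.mixed); // or ResultType.recent / ResultType.popular
QueryResult result = twitter.search(query);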

Magento Bulk update attributes

I am missing the SQL from this to bulk update attributes by SKU/UPC.
Running EE1.10, FYI.
I have all the rest of the code working, but I'm not sure about the who/what/why of actually updating our attributes, and haven't been able to find them. My logic is:
Open a CSV and grab all SKUs and associated attributes into a 2D array
Parse the SKU into an entity_id
Take the entity_id and the attribute and run updates until finished
Take the rest of the day off since it's Friday
Here's my (almost finished) code; I would GREATLY appreciate some help.
/**
* FUNCTION: updateAttrib
*
* REQS: $db_magento
* Session resource
*
* REQS: entity_id
* Product entity value
*
* REQS: $attrib
* Attribute to alter
*
*/
See my response for working production code. Hope this helps someone in the Magento community.
While this may technically work, the code you have written is just about the last way you should do this.
In Magento, you really should be using the models provided by the code and not write database queries on your own.
In your case, if you need to update attributes for 1 or many products, there is a way for you to do that very quickly (and pretty safely).
If you look in: /app/code/core/Mage/Adminhtml/controllers/Catalog/Product/Action/AttributeController.php you will find that this controller is dedicated to updating multiple products quickly.
If you look in the saveAction() function you will find the following line of code:
Mage::getSingleton('catalog/product_action')
->updateAttributes($this->_getHelper()->getProductIds(), $attributesData, $storeId);
This code is responsible for updating all the product IDs you want, changing only the given attributes, for a single store at a time.
The first parameter is basically an array of Product IDs. If you only want to update a single product, just put it in an array.
The second parameter is an array that contains the attributes you want to update for the given products. For example if you wanted to update price to $10 and weight to 5, you would pass the following array:
array('price' => 10.00, 'weight' => 5)
Then finally, the third and final parameter is the store ID you want these updates to apply to. Most likely this number will be either 1 or 0.
I would play around with this function call and use this instead of writing and maintaining your own database queries.
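Putting that together, a call would look something like the sketch below; the product IDs and attribute values are made up for illustration, and 0 is assumed to be your default store ID:
// Update price and weight for two products in store 0 (hypothetical IDs and values)
Mage::getSingleton('catalog/product_action')->updateAttributes(
    array(123, 456),                        // product IDs to update
    array('price' => 10.00, 'weight' => 5), // attribute code => new value
    0                                       // store ID the values apply to
);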
The general update query will look like:
UPDATE
  catalog_product_entity_[backend_type] cpex
SET
  cpex.value = ?
WHERE cpex.attribute_id = ?
  AND cpex.entity_id = ?
In order to find the [backend_type] associated with the attribute:
SELECT
  backend_type
FROM
  eav_attribute
WHERE entity_type_id =
  (SELECT
    entity_type_id
  FROM
    eav_entity_type
  WHERE entity_type_code = 'catalog_product')
AND attribute_id = ?
You can get more info from the following blog article:
http://www.blog.magepsycho.com/magento-eav-structure-role-of-eav_attributes-backend_type-field/
Hope this helps you.

Equivalent BAPI for an MB01 transaction?

I'm trying to replace some unreliable SAP scripting we have in place to do an MB01 from a custom goods receipt application. I have come across the .NET Connector and it looks like it could do the job for me.
Research has churned up the BAPI called BAPI_GOODSMVT_CREATE, but can anyone tell me what parameters might be required to perform this transaction?
I have access to a SAP test environment.
BAPI_GOODSMVT_CREATE accepts a table of values called GOODSMVT_ITEM which contains 121 fields. I'm sure that not all of these fields are required.
Ultimately, I guess my question is: how can I work out which ones are required?
Do you have access to a SAP system? I have recently used this BAPI, and it has quite detailed documentation. To view the documentation, use transaction SE37, and enter the BAPI name. Unfortunately I don't currently have access to a system.
You will have to ask one of your MM/Logistics people to tell you what the movement type (BWART) is, and depending on the config you will need details like material number (MATNR), plant (WERKS), storage location etc.
MB01 is a Post GR for PO transaction; it is the equivalent of GM_Code 01 in MIGO or BAPI_GOODSMVT_CREATE. The MIGO transaction is the modern successor to the obsolete MB01.
So, as per the BAPI_GOODSMVT_CREATE documentation, for GM_Code 01 the following fields are mandatory:
Purchase order
Purchase order item
Movement type
Movement indicator
Quantity in unit of entry
ISO code unit of measurement for unit of entry, or quantity proposal
Here is the sample:
gmhead-pstng_date = sy-datum.
gmhead-doc_date   = sy-datum.
gmhead-pr_uname   = sy-uname.
gmcode-gm_code    = '01'.
loop at pcitab.
  itab-move_type  = pcitab-mvt_type.
  itab-mvt_ind    = 'B'.
  itab-plant      = pcitab-plant.
  itab-material   = pcitab-material.
  itab-entry_qnt  = pcitab-qty.
  itab-move_stloc = pcitab-recv_loc.
  itab-stge_loc   = pcitab-issue_loc.
  itab-po_number  = pcitab-pur_doc.
  itab-po_item    = pcitab-po_item.
  concatenate pcitab-del_no pcitab-del_item into itab-item_text.
  itab-move_reas  = pcitab-scrap_reason.
  append itab.
endloop.
call function 'BAPI_GOODSMVT_CREATE'
  exporting
    goodsmvt_header  = gmhead
    goodsmvt_code    = gmcode
  importing
    goodsmvt_headret = mthead
  tables
    goodsmvt_item    = itab
    return           = errmsg.
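One thing the sample does not show: the goods movement created by the BAPI is only actually posted once the transaction is committed, so the call is normally followed by something like:
call function 'BAPI_TRANSACTION_COMMIT'
  exporting
    wait = 'X'.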