Bloomberg API and specifying preferred order of pricing sources

Is there any way within the Bloomberg API to specify a preferred ordering of pricing sources through an API call? Using PCS, you can specify the terminal's preferred pricing sources; however, I want to make this call programmatically through the API.
Currently, you can explicitly specify a single pricing source like this:
MSFT#ETPX US Equity
However, can something like this be done?
MSFT#ETPX&MSPE&ANOTHER...ETC US Equity

The user's preferences will be honoured by the API call, so if you do not specify the pricing source, everything will work as expected. This is also respected by any per-security Data License requests made against that linked account.
Whether or not this is a good thing is open to debate; we used to advise everyone never to change the default settings.

You could define your priority programmatically (pseudo-code; fetchPrice stands in for your actual data request):
function getPriceWithFallback(ticker, sources) {
    // Try each pricing source in priority order and return the first price found.
    for (const source of sources) {
        const price = fetchPrice(ticker + "#" + source + " US Equity");
        if (price != null) return price;
    }
    return null;
}
// e.g. getPriceWithFallback("MSFT", ["ETPX", "MSPE"])
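If you are using the Python flavour of the API, here is a minimal sketch of what such a fetchPrice helper might look like, built on an ordinary ReferenceDataRequest with the official blpapi library. The helper name and the use of PX_LAST are my own assumptions, not part of the question:

import blpapi

def fetch_price(security, field="PX_LAST"):
    # Connects to the local terminal (localhost:8194 by default).
    session = blpapi.Session()
    if not session.start():
        return None
    try:
        if not session.openService("//blp/refdata"):
            return None
        service = session.getService("//blp/refdata")
        request = service.createRequest("ReferenceDataRequest")
        request.getElement("securities").appendValue(security)
        request.getElement("fields").appendValue(field)
        session.sendRequest(request)
        while True:
            event = session.nextEvent(500)  # poll with a 500 ms timeout
            for msg in event:
                if msg.hasElement("securityData"):
                    data = msg.getElement("securityData").getValueAsElement(0)
                    fields = data.getElement("fieldData")
                    if fields.hasElement(field):
                        return fields.getElementAsFloat(field)
            if event.eventType() == blpapi.Event.RESPONSE:
                break  # final response received; field was not populated
    finally:
        session.stop()
    return None

A pricing-source-qualified ticker such as "MSFT#ETPX US Equity" can be passed straight in as the security.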

Related

Yodlee APIs: ContentServiceInfos versus SiteInfos

There appear to be two lines of APIs for adding, authenticating, and aggregating sites. Which version of the documentation/SDK set your rep started you off on, or where in the SDK Guide you began implementing, determines where you start.
Path #1 starts at
ContentServiceTraversal, which allows retrieval of all ContentServiceInfo entries (filterable by container type, such as BANK)
ItemManagementService is used to add these items
Refresh is done through RefreshService (most of these APIs do not contain the word Site)
Path #2 starts at
SiteTraversalService, which allows retrieval of all SiteInfo entries (no apparent support for filtering by container type)
SiteAccountManagementService is used to add these items
Refresh is done through RefreshService (all of these APIs contain the word Site).
From the best I can tell, the aforementioned APIs duplicate a lot of functionality. I have noticed certain APIs that exist on one branch and not the other, but usually the differences are minor (e.g., things you are able to filter by).
I started off with ContentServiceInfo because the documentation and samples that our rep initially gave us started there. Additionally, this API provides greater granularity from the start, e.g., being able to filter by container type, since we were pretty much only interested in Bank and Processor sites (the latter of which I do not believe you support).
My questions are:
Do the two branches of API do the exact same thing?
Do they mostly behave the same way?
Do they back-end to the exact same
System
Data store
Scraper?
Is one line of API supposed to be deprecated sooner in the future than another?
Does one line of API have more future in terms of actually adding new or augmenting existing functionality?
Site-level addition was introduced in the Yodlee APIs to overcome the fact that, even though a user had bank, credit card, loan, and rewards accounts at the same end site, they had to provide credentials for each of these containers separately. The site-level addition APIs try to add all these containers with only one set of credentials. That's the only difference between container-based addition and site-based addition.
To answer your questions:
Do the two branches of API do the exact same thing?
Do they mostly behave the same way?
If you mean the aggregation functionality, yes. Except for the fact that site level adds/refreshes all the containers (bank, credit card, loan, rewards) while container level can add/refresh only one container per API call, all the other behavior is the same.
Do they back-end to the exact same
System
Data store
Scraper?
If you are referring to the Yodlee data-gathering components, yes.
Is one line of API supposed to be deprecated sooner in the future than another?
No. These two sets of APIs cater to different needs. If you are a company that relies solely on credit card data, site-level addition would be overkill, since the aggregation takes longer; it makes more sense to use container-based addition. There is also the factor of backward compatibility, which rules out deprecation of these APIs.

Commission Junction API for Local (Daily) Deals

Has anyone used Commission Junction's Product Catalog Search API for searching/fetching local deals? (BuyWithMe and KGBDeals post their deals to CJ.)
There is a Yipit clone out there which uses this API. The clone was unable to categorize deals properly by location, and I was supposed to fix that. The problem I found is that the API's response does not contain location/city info, so deals cannot be categorized by city. This basically defeats the purpose of local deals.
I am looking for advice from anyone who has done similar work using the CJ API. Maybe I am missing something.
OneBigPlanet has an all-in-one API covering affiliate networks and daily-deal providers for the U.S. & Canada.
If you are going to use a deal aggregator API for your site/blog, you may want to take a look at this one as well.
SideBuy has recently released version 1 of its API, which lets users (like yourself) connect to its comprehensive set of daily deals, using several parameters to fully customize the listings. I suggest you check it out and get in touch via SideBuy's site if you need further assistance.
Disclaimer: I work for sidebuy.com.

Bloomberg API: How to get the list of Pricing Sources?

It's possible to get the current pricing source for a security (the PRICING_SOURCE field). Is there any field that returns the list of all available pricing sources for a given security?
Thank you.
No - it's not currently possible.
Any field visible in FLDS in the terminal is accessible via the API, such as the PRICING_SOURCE field mentioned in the question.
However, there is currently no API equivalent of the terminal functions PCS or ALLQ. There is a request for this functionality with the Bloomberg developers, and you can contact the Bloomberg helpdesk to have yourself added to the request so that you are informed when it becomes available.
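That said, the first point is easy to exercise: PRICING_SOURCE comes back through a plain ReferenceDataRequest like any other FLDS field. A brief sketch with the Python blpapi library (error handling omitted; the security is just an example):

import blpapi

session = blpapi.Session()  # local terminal connection
if session.start() and session.openService("//blp/refdata"):
    service = session.getService("//blp/refdata")
    request = service.createRequest("ReferenceDataRequest")
    request.getElement("securities").appendValue("MSFT US Equity")
    request.getElement("fields").appendValue("PRICING_SOURCE")
    session.sendRequest(request)
    while True:
        event = session.nextEvent(500)
        for msg in event:
            if msg.hasElement("securityData"):
                data = msg.getElement("securityData").getValueAsElement(0)
                print(data.getElement("fieldData").getElementAsString("PRICING_SOURCE"))
        if event.eventType() == blpapi.Event.RESPONSE:
            break
    session.stop()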
Look up your PCS enablements and find the one you want to use, then note the EID number from the respective CTRB page. You can request EID-specific prices via the API.

Google Analytics retrieve custom variables statistics

Edit: reworked the question, which was not clear.
New to GA, I'm looking for a way to automatically retrieve custom variable statistics.
The query would have
a start and an end date (possibly equal)
a variable name
For instance, a Page-level variable Brand takes only three possible values, that are set by the web server, and seen by the client.
The values are Apple, Google and Microsoft.
The query to Google Analytics could be something like this (pseudo-code), provided that I use an authentication token previously acquired:
...getstatistics?myToken=123&variable=Brand&datefrom=20110121&dateto=20110121
And the result could be some XML-like data:
<variable>Brand</variable><value>Apple</value><count>3214</count>
<variable>Brand</variable><value>Google</value><count>4321</count>
<variable>Brand</variable><value>Microsoft</value><count>1345</count>
Meaning, for instance, that the page-level custom variable Brand was set to the value Apple by the web server (and thus seen by the client and sent to GA) 3214 times.
What is the correct way/protocol to query values/statistics from GA, in order to get statistics related to custom variables?
So, this is my understanding of what you're doing:
You're setting page-level custom variables (important technical note: _setCustomVar needs to be called before _trackPageview or another tracking call, or the variable won't be recorded).
Your code might look something like this:
_gaq.push(['_setCustomVar', 2, 'Brand', 'Apple', 3]);  // slot 2, name, value, scope 3 = page-level
Now, when querying the Google Analytics API, it's important to note that the slot number matters, since the slot you're accessing is explicitly named in the query.
So, to do this, you'd need to set your dimensions to ga:customVarName2 and ga:customVarValue2, and decide what metric you're interested in getting. You mention page views, so you'd use ga:pageviews. (You're by no means limited to pageviews; you can use any metric besides a couple of the AdWords-specific ones.)
This query would return all of the custom variable values from this slot, and the number of pageviews associated with each.
You also mentioned you'd want to be able to filter by value.
You'd do that by setting the filter value to something like ga:customVarValue2==Apple.
You can see what a query like that looks like in the Query Explorer.
Finally, all Google Analytics API queries require you to set a date range, so you'd add your start and end dates to the query as well.
All you need to do is decide which library you want to use as an interface, and you're set to go.
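For instance, with the legacy Data Export API that matches the _gaq-era tracking code above, the query is just an authenticated GET against the data feed. A hedged Python sketch; the profile ID and the authorization token are placeholders you would obtain through your own auth flow:

from urllib.parse import urlencode

params = {
    "ids": "ga:12345678",                                 # placeholder profile ID
    "dimensions": "ga:customVarName2,ga:customVarValue2",
    "metrics": "ga:pageviews",
    "filters": "ga:customVarValue2==Apple",               # optional value filter
    "start-date": "2011-01-21",
    "end-date": "2011-01-21",
}
url = "https://www.google.com/analytics/feeds/data?" + urlencode(params)
# Send the GET with your Authorization header; the response is an Atom/XML
# feed with one entry per (customVarName2, customVarValue2) pair, each
# carrying a ga:pageviews metric, much like the XML sketched in the question.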
Google has a handy resource, called the Google Analytics Data Explorer that can help answer a lot of your questions by letting you experiment through an interface, as long as you login with your Google Analytics credentials.
As you add parameters using their tools, the system will automatically build your URL/Query.
If that's not enough, Google also has some Interactive Examples using JavaScript. Like the Data Explorer, you can also login with your Google Analytics credentials and run the examples to see what data would be returned.
These tools are awesome because they help take the guesswork out of figuring out how to target the exact data you're searching for.

eCommerce Third Party API Data Best Practice

What would be best practice for the following situation? I have an ecommerce store that pulls inventory levels from a distributor. Every time a user loads a product detail page, should the site call the third-party API for the most up-to-date data? Or should the site call the third-party API, store that data in its own system for a certain amount of time, and update it periodically?
To me it seems obvious that it should be updated every time the product detail page is loaded, but what about high-traffic ecommerce stores? Are completely different solutions used in that case?
In this case I would definitely cache the results from the distributor's site for some period of time, rather than hitting them on every request. However, I would not simply use a blanket 5-minute or 30-minute timeout for all cache entries. Instead, I would use some heuristics. If possible, for instance if your application is written in a language like Python, you could attach a simple script to every product which implements the timeout.
This way, if it is an item that is requested infrequently, or one that has a large amount in stock, you could cache for a longer time.
if product.popularityrating > 8 or product.lastqtyinstock < 20:
    cache.expire(productnum)
    distributor.checkstock(productnum)
This gives you flexibility that you can call on if you need it. Initially, you can set all the rules to something like:
cache.expireover("3m", productnum)
distributor.checkstock(productnum)
In actual fact, the script would probably not include the checkstock function call, because that would live in the main app, but it is included here for context. If Python seems too heavyweight to include just for this small amount of flexibility, have a look at Tcl, which was specifically designed for this type of job. Both can be embedded easily in C, C++, C#, and Java applications.
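To make the heuristic concrete, here is a small self-contained Python sketch; the product attributes and the distributor client mirror the pseudo-code above and are placeholders:

import time

_cache = {}  # productnum -> (fetched_at, quantity)

def cache_ttl(product):
    # Popular or nearly-sold-out items go stale quickly; slow movers keep longer.
    if product.popularityrating > 8 or product.lastqtyinstock < 20:
        return 3 * 60    # 3 minutes
    return 30 * 60       # 30 minutes

def get_stock(product, distributor):
    entry = _cache.get(product.productnum)
    if entry and time.time() - entry[0] < cache_ttl(product):
        return entry[1]  # cache entry still fresh
    qty = distributor.checkstock(product.productnum)  # placeholder API call
    _cache[product.productnum] = (time.time(), qty)
    return qty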
Actually, there is another solution. Your distributor keeps the product catalog on their servers and gives you access to it via the Open Catalog Interface. When a user wants to place an order, they are redirected in place to the distributor's catalog, choose items, and the selection is then transferred back to your shop.
It is widely used in SRM (Supplier Relationship Management) branch.
It depends on many factors: the traffic to your site, how often the inventory levels change, the business impact of displaying outdated data, how often the suppliers allow you to call their API, the API's SLA in terms of availability and performance, and so on.
Once you have these answers, there are of course many possibilities. For example, for a low-traffic site where getting the inventory right is important, you may want to call the third-party API on every request, but revert to some alternative behavior (such as using cached data) if the API does not respond within a certain timeout, as sketched below.
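A minimal sketch of that timeout-then-fallback pattern, assuming a JSON inventory endpoint (the URL and response field are illustrative) and using the requests library:

import requests

def get_inventory(sku, cache):
    try:
        # Short deadline: do not let a slow supplier API block the page render.
        resp = requests.get(
            "https://api.distributor.example/inventory/" + sku, timeout=2.0)
        resp.raise_for_status()
        qty = resp.json()["quantity"]
        cache[sku] = qty          # remember the last good answer
        return qty
    except requests.RequestException:
        return cache.get(sku)     # stale but better than an error page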
Sometimes, well-designed APIs will include hints as to the validity period of the data. For example, some REST-over-HTTP APIs support the various HTTP cache-control headers, which can be used to specify a validity period or to retrieve data only if it has changed since the last request; see the sketch below.
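For example, revalidating with an ETag costs almost nothing when the data has not changed. A sketch with the requests library (the endpoint is illustrative, and this assumes the API actually sends ETag headers):

import requests

_state = {"etag": None, "data": None}

def refresh(url):
    # Conditional GET: only download the body if the server's copy changed.
    headers = {"If-None-Match": _state["etag"]} if _state["etag"] else {}
    resp = requests.get(url, headers=headers, timeout=5.0)
    if resp.status_code == 304:   # not modified: cached copy is still valid
        return _state["data"]
    resp.raise_for_status()
    _state["etag"] = resp.headers.get("ETag")
    _state["data"] = resp.json()
    return _state["data"]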