Need to know when the Jawbone server updates the newest data from all UP users

Could you kindly let me know when the Jawbone data server updates the newest individual data from all users?
And is it possible to retrieve the newest data from the Jawbone server through the API at any time?
I need this information for our Jawbone app.
Thank you for your help.

Your question is a little vague, so I'll try my best to answer it.
The API will always have the latest data available. A call to any of the endpoints will return an authorised user's recent data for each type of data tracked. I hope this helps.
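As a hedged illustration only: the UP endpoints are OAuth-protected HTTPS calls, so "a call to any of the endpoints" amounts to one authenticated GET per data type. The base URL and Bearer-token header below follow the general shape of the UP API, but treat both as assumptions to verify against the official documentation; the token is a placeholder.

```python
# Sketch: build one authenticated request for a user's most recent
# "moves" data. The URL path and Authorization header are assumptions
# based on the general shape of the Jawbone UP API; verify them
# against the docs before relying on them.
import urllib.request

def latest_moves_request(access_token):
    url = "https://jawbone.com/nudge/api/v.1.1/users/@me/moves"
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + access_token)
    return req

req = latest_moves_request("PLACEHOLDER_TOKEN")
print(req.full_url)
```

Sending the request at any time returns whatever the server currently holds; the API does not expose a separate "last synced" schedule per user.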


Not getting any measurement data from the iHealth cloud (sandbox)

I am writing an application that uses the iHealth API. Scales, blood pressure monitors, and similar iHealth devices send their data via Bluetooth and smartphone apps to the iHealth internet cloud. Each user of these devices therefore has an account in the iHealth cloud, where he can log in and see his data. My app uses the iHealth API to get the data from this cloud. The device user grants me access to his data via OAuth 2, and after receiving the access credentials I request the user's data with the given client ID.
Well, here comes the problem. As a result I get a JSON object of measurement data without any data. That means there is no error message and everything seems fine, except that there is no data for this user. It is not any kind of error documented here:
sandbox.ihealthlabs.com/dev_documentation_ResponseFormatAndErrors.htm
The HTTP status is good too (200).
I don't use any optional restrictions, such as asking for data from only a certain time range.
One explanation would be that the user has not yet used his devices, so the cloud has no data. Unfortunately this is something I can't influence: my app is not ready yet, so I only use the sandbox cloud offered for development (http://sandbox.ihealthlabs.com).
The sandbox user can't use the smartphone apps, so I can only read whatever data is already in the cloud. Of course I can't test without data. Who could develop without receiving data? There has to be an error, maybe a rather silly one. I asked support more than 9 days ago but still haven't got an answer.
Getting JSON data from the cloud with the blood pressure API (OpenApiBP); the XX parts are abbreviated IDs, tokens, and so on:
http://sandboxapi.ihealthlabs.com/openapiv2/user/d7XX..XX9f/bp.json/?client_id=a6XX..XXbe&client_secret=2bXX..XX3f&redirect_uri=http%3A%2F%2Flocalhost%3A8082%2FTelemedicina%2Fdispositivos.html%3Fregreso%3DiHealth&access_token=u8XX..XXyw&sv=6cXX..XXcf&sc=deXX..XXcf
The answer to this (without any change) is just:
{"BPDataList":[],
"BPUnit":0,
"CurrentRecordCount":0,
"NextPageUrl":"",
"PageLength":50,
"PageNumber":1,
"PrevPageUrl":"",
"RecordCount":0}
Using the weight API (OpenApiWeight) has the same problem as OpenApiBP.
I have read the documentation more than once and searched the web for an explanation.
As you see, I query the API and get this possibly correct but, for development purposes, useless answer. Any ideas? What am I missing?
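Since the empty response above is syntactically valid, it is easy to mistake it for an error. A minimal sketch in Python of telling "no records" apart from a genuinely malformed reply, using the field names from the OpenApiBP response shown above (the sample string is that same response, hand-copied; the actual HTTP request is assumed to have happened already):

```python
# Sketch: distinguish "valid response, zero records" from a malformed
# payload. Field names (BPDataList, RecordCount) come from the sandbox
# response shown above.
import json

def bp_records(response_text):
    """Return the list of BP records, or raise if the payload is malformed."""
    payload = json.loads(response_text)
    if "BPDataList" not in payload:
        # A real problem: the documented structure is missing entirely.
        raise ValueError("Unexpected response: %r" % payload)
    return payload["BPDataList"]

sample = '''{"BPDataList":[], "BPUnit":0, "CurrentRecordCount":0,
             "NextPageUrl":"", "PageLength":50, "PageNumber":1,
             "PrevPageUrl":"", "RecordCount":0}'''

records = bp_records(sample)
if not records:
    # HTTP 200 plus an empty list: the account simply has no
    # measurements, which is exactly the sandbox situation here.
    print("no measurements for this user")
```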
Update:
An iHealth Labs technician answered me: right now there is no user data in the sandbox. My request and the answer I received are therefore correct; it is not an error. To get data, the application has to be registered for the real world. He did not explain how to test given this limitation of the sandbox.
I will let the answer of the iHealth Labs API technician speak for itself:
"The sandbox does not provide any actual user data. If you want actual live data you will have to register a new application at developer.ihealthlabs.com."
So if this is the answer to my question of why I am not receiving any data, it means there really is no data for me to receive.
Thanks to all who tried to help me, especially Scott Lawson. I hope this answer will help others. Knowing this a few days ago would have saved me a lot of time.

Does SoundCloud provide full dataset dumps of the API?

My research team has been carefully exploring the SoundCloud API, and has obtained a client ID. However, does SoundCloud provide dumps with complete sets of data? For example, Jamendo has nightly dumps of most of their track-related data and Magnatune has a full dump of all track data updated whenever there is a change. Does SoundCloud provide any similar full dumps of its data?
We do not provide this kind of dump of our data.

Bloomberg Session timeout?

As shown by the examples in the Bloomberg API v3, I need to start a Bloomberg session to open a service, then use the service to create a request.
My question is: if my program sent a request, got the answer, and then after a while needs to send another request, how do I determine whether the session/service is still good to use, or do I need to start another session?
Does it cost much to start a session?
Does it bother Bloomberg's server if I start and stop a session quite often?
By the way, when retrieving historical data, what is the proper amount of data to ask for in a single request?
Thanks a lot for your kind help!
There are many questions here. The following answers are just my opinion; your best bet is to ask Bloomberg themselves via "Help Help" in your terminal session. Tell the person at the other end that you want your question to go to the API team.
Q: How do I determine if the session is still good?
A: I don't know of any way other than using it and seeing if an exception occurs. However, I have had sessions stay open for many hours perfectly happily.
Q: Does it cost much to start a session?
A: Bloomberg doesn't give any guidance on this, but compared with the overhead of fetching data it doesn't seem like much.
Q: What is the proper size of data to ask for?
A: I believe that if you ask for a lot, Bloomberg will break the request up for optimal transport, so you should ask for as much as you can in one request, as it will be more efficient. Beware of stepping over your data limits, though.
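The "use it and see if an exception occurs" approach can be wrapped in a small retry helper. The following is only a sketch of that pattern, not real Bloomberg API code: SessionError, open_session, and send_request are hypothetical stand-ins for whatever your Bloomberg wrapper actually provides.

```python
# Sketch of "try the existing session, reopen on failure".
# SessionError, open_session and send_request are hypothetical
# placeholders, not part of the real Bloomberg API.

class SessionError(Exception):
    """Stand-in for whatever 'session is dead' error your wrapper raises."""

def request_with_retry(session, request, open_session, send_request):
    """Send a request on `session`; if it has gone stale, reopen once and retry."""
    try:
        return send_request(session, request), session
    except SessionError:
        session = open_session()  # old session is unusable: start fresh
        return send_request(session, request), session

# Tiny simulation of a session that has silently timed out:
def open_session():
    return "fresh-session"

def send_request(session, request):
    if session == "stale-session":
        raise SessionError("session timed out")
    return "response-data"

result, session = request_with_retry("stale-session", "HistoricalDataRequest",
                                     open_session, send_request)
print(result, session)  # response-data fresh-session
```

Since starting a session seems cheap relative to fetching data (per the answer above), retrying at most once per request is usually enough.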

How can I get a live currency feed into my application? Where can I get feeds for all currencies?

Can somebody help me with this? How can I get a live currency feed into my application, and where can I get feeds for all currencies?
You can use the free feeds from geoplugin, but I think it only provides current exchange rates and does not allow you to query past dates.
http://www.geoplugin.com/webservices/currency
Xe.com has a data feed service, which is licensed.
http://www.xe.com/dfs/
I had a look around; there are paid ones, but it depends on whether you want to pay for the feed, and whether you want a daily updated one, an hourly one, or a completely live one.
I've found http://www.ecb.int/stats/eurofxref/eurofxref-daily.xml and http://forexfeed.net/forex-data-services, which might be useful for you.
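The ECB file above is a small, freely available XML document. A sketch of parsing it in Python follows; the structure shown (one Cube element per currency, with currency and rate attributes) matches the shape of eurofxref-daily.xml, but the sample string below is a hand-written excerpt, not live data, and the rates in it are illustrative.

```python
# Sketch: pull EUR-based exchange rates out of the ECB
# eurofxref-daily.xml format. In practice you would download the URL
# above first; here a trimmed hand-made excerpt stands in for it.
import xml.etree.ElementTree as ET

sample = """<gesmes:Envelope
    xmlns:gesmes="http://www.gesmes.org/xml/2002-08-01"
    xmlns="http://www.ecb.int/vocabulary/2002-08-01/eurofxref">
  <Cube>
    <Cube time="2013-06-28">
      <Cube currency="USD" rate="1.3080"/>
      <Cube currency="JPY" rate="129.39"/>
    </Cube>
  </Cube>
</gesmes:Envelope>"""

ns = {"e": "http://www.ecb.int/vocabulary/2002-08-01/eurofxref"}
root = ET.fromstring(sample)
# Select only the innermost Cube elements that carry a currency attribute.
rates = {cube.get("currency"): float(cube.get("rate"))
         for cube in root.findall(".//e:Cube[@currency]", ns)}
print(rates)
```

Note the file is updated once per working day, so this suits the "daily updated" case, not a live feed.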
There are some data feed providers that have free plans. I am using https://www.currencydatafeed.com, which provides a free plan with 1-minute updates and a 30-minute delay.
If you are looking for a real-time feed, they also provide the cheapest on the net.

Getting historical data from Twitter [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
For a research project I would like to get the last 3 months' worth of Twitter messages. Technical challenges aside, is this possible, perhaps by using some sort of slow polling mechanism to keep the rate limiter at bay?
The Twitter API states: "Clients may request up to 3,200 statuses via the page and count parameters for timeline REST API". Is that limit per hour? Per day? Or in total?
Any suggestions? Would it even be theoretically possible? Has someone done something similar before?
Thanks!
Marco
Twitter notoriously does not make tweets older than three weeks "available"; in some cases you can only get one week. You're better off storing tweets for the next three months as they arrive. Many rightly doubt whether tweets that old are even persisted by Twitter.
Are you looking for just any tweets? If so, check out the Streaming API's status/sample method. The streaming API uses persistent HTTP sockets that can be a pain to program, but it's quite graceful when you get it working. I'd recommend setting up a little script to dump tweets from status/sample into a DB. You should have a TON of data after just a few days.
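The "dump tweets from status/sample into a DB" idea can be sketched as follows, in Python with sqlite3. The two JSON lines stand in for what the streaming connection would deliver (the real endpoint needs a persistent authenticated HTTP socket, which is out of scope here), and their ids and texts are made up for illustration.

```python
# Sketch: store streamed statuses in SQLite. The `stream` list stands
# in for lines read off the status/sample streaming connection.
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for a real dump
conn.execute(
    "CREATE TABLE IF NOT EXISTS tweets (id INTEGER PRIMARY KEY, text TEXT)")

stream = [
    '{"id": 1, "text": "first sample tweet"}',
    '{"id": 2, "text": "second sample tweet"}',
]

for line in stream:
    status = json.loads(line)
    # INSERT OR IGNORE deduplicates on the primary key, since the
    # stream can occasionally redeliver a status.
    conn.execute("INSERT OR IGNORE INTO tweets VALUES (?, ?)",
                 (status["id"], status["text"]))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM tweets").fetchone()[0]
print(count)  # 2
```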
You could use the Search API: don't give it a search term, return the maximum of 100 results per page, then go through each page twice a minute (120 requests an hour, 30 times less than the rate limit). However, if my math is correct, that gives you at most about 12,000 tweets an hour. The problem is that Twitter has added approximately 1.75 billion tweets over the past 3 months, so at that rate it would take roughly 146,000 hours, or about 16 years, to collect them all.
You could ask this question over on the Twitter Development talk on Google Groups, or contact Twitter to get white-listed so you could make up to 20,000 requests an hour.
Personally, I don't think it's possible.
DataSift claims to have a Twitter historical data API coming soon; you can sign up on their site to be notified when it's available.
This may not have existed when you first asked the question but the "PeopleBrowsr" API is perfect for this and you can go back 1400 days with a single API call: https://developer.peoplebrowsr.com/pb
Hope that helps!
Keyhole can get you historical tweets in XLS format or present them in a visual dashboard. The preview samples only a few of the most recent tweets; however, you can request historical data if you email them.
See: http://keyhole.co/conversation_tracking
You can read historic Twitter data using Gnip's Historical PowerTrack tool. It gives you access to all Twitter data since the first tweet, and it is a fairly simple tool to use.
You can get free estimates for the data scope and cost using a service built by my company called Sifter. If you decide to purchase access to the data it will be available via our text analytics platform DiscoverText, where you can search, filter, de-duplicate, cluster, human code, and machine-classify the data.