Bloomberg API: session timeout?

As shown in the examples for the Bloomberg API v3, I need to start a Bloomberg session to open a service, then use the service to create a request.
My question is: if my program sends a request, gets the answer, and then after a while needs to send another request, how do I determine whether the session/service is still good to use, or do I need to start another session?
Does it cost much to start a session?
Does it bother Bloomberg's server if I start and stop a session quite often?
BTW, when I'm retrieving historical data, what's the proper size of data to ask for within a single request?
Thanks a lot for your kind help!

There are many questions here. The following answers are just my opinion; your best bet is to ask Bloomberg themselves via "Help Help" in your terminal session. Tell the person at the other end that you want your question to go to the API team.
Q: How do I determine if the session is still good?
A: I don't know of any other way than using it and seeing if an exception occurs. However, I have had sessions stay open for many hours perfectly happily.
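For what it's worth, a minimal sketch of that try-it-and-recover pattern, assuming the Python blpapi bindings (the question doesn't name a language, so this is purely illustrative):
import blpapi

def get_session():
    # Start a new session (default: localhost:8194) and open the reference data service.
    session = blpapi.Session()
    if not session.start():
        raise RuntimeError("Failed to start Bloomberg session")
    if not session.openService("//blp/refdata"):
        raise RuntimeError("Failed to open //blp/refdata")
    return session

def send_history_request(session, security):
    service = session.getService("//blp/refdata")
    request = service.createRequest("HistoricalDataRequest")
    request.getElement("securities").appendValue(security)
    request.getElement("fields").appendValue("PX_LAST")
    request.set("startDate", "20230101")
    request.set("endDate", "20230131")
    session.sendRequest(request)
    # ... consume the PARTIAL_RESPONSE/RESPONSE events from session.nextEvent() here ...

session = get_session()
try:
    send_history_request(session, "IBM US Equity")
except Exception:
    # The session may have gone stale; restart it and retry once.
    session.stop()
    session = get_session()
    send_history_request(session, "IBM US Equity")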
Q: Does it cost much to start a session?
A: Bloomberg don't give any guidance on this, but compared with the overhead of fetching the data itself it doesn't seem like much.
Q: What is the proper size of data to ask for?
A: I believe that if you ask for a lot, Bloomberg will break the request up for optimal transport, so you should ask for as much as you can in one request; it will be more efficient. Beware of stepping over your data limits, though.

Is it feasible to let users run dedicated videogame servers with no user accounts?

I apologise if something like this has been answered before; I just can't figure out a good way to word my question well enough to include all the details of my problem.
I'd like users to be able to host servers for my indie game in a way virtually identical to, for example, Minecraft. I don't want any official servers, the game is mostly intended to be played with friends and not random strangers.
I've thought of many ways to accomplish this but I could never solve one important detail - I want the server to be able to remember users and put them where they left off when they reconnect (give them their character, the character's inventory, etc).
But any solution I could find or think of either made it potentially very easy to steal someone's character by connecting to the server and pretending to be them, or required me to give players a way to register accounts, something I can't afford to host myself.
I guess what I need is a way for the server to send a token to a newly connecting player, and then a way to check that the player sending that token back is the same person, and not an attempt to replicate the token. That sounds like public-key cryptography to me, but the game engine I'm using doesn't seem to have any libraries for that (unsurprisingly), and I'm certainly not qualified to write such a library myself. But maybe there's an easier solution I'm somehow missing.
This might be a stupid question, but I hope it's worth a try asking. Thank you in advance for any help. Sorry I was so wordy by the way.
TLDR: I want users to host game servers that can remember reconnecting players without risk of players' progress being stolen.
If you have not already, look into sessions and session cookies. Also, setting up a basic login system with PHP (or whatever server-side language your server uses) is not hard, and most basic hosting provides the MySQL and PHP needed for a simple login page; you just have to code it yourself.
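If full public-key cryptography is out of reach, a challenge-response scheme built on a plain HMAC gets you most of the way: the server hands the client a random secret once, and on every reconnect sends a fresh nonce that the client answers with HMAC(secret, nonce), so the secret itself never crosses the wire again. A minimal sketch in Python, assuming your engine exposes at least a hash/HMAC primitive (the function names are illustrative):
import hmac, hashlib, secrets

def issue_secret():
    # Server side, on first connect: the client stores this locally.
    return secrets.token_hex(32)

def make_challenge():
    # Server side, on reconnect: a fresh nonce, never reused.
    return secrets.token_hex(16)

def answer_challenge(secret, nonce):
    # Client side: prove knowledge of the secret without sending it.
    return hmac.new(secret.encode(), nonce.encode(), hashlib.sha256).hexdigest()

def verify(secret, nonce, response):
    # Server side: recompute and compare in constant time.
    expected = hmac.new(secret.encode(), nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

secret = issue_secret()     # shared once, stored by both server and client
nonce = make_challenge()    # sent to the reconnecting client
assert verify(secret, nonce, answer_challenge(secret, nonce))
This defeats replaying a captured token; it does not protect against someone copying the secret off the player's disk, but neither would a password.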

What is a good strategy for polling Microsoft Graph API so that I don't get throttled?

My goal is to maintain "real-time" (or as close as possible) information about the email messages sent by a group of users.
My ideal solution would be to periodically query the API for messages by all users in the group. This feature is not (yet?) implemented.
My second choice would be to create subscriptions (https://graph.microsoft.io/en-us/docs/api-reference/v1.0/api/subscription_post_subscriptions) for every member in the group and then request message information after I become aware of an event. The problem is, in practice, I am only allowed to create 20-30 simultaneous subscriptions (Issues to use Webhook for Microsoft Graph API), which might not be enough.
So I'm stuck with polling all the users in a cycle. The main problem with this approach is that I can't find any information on how many requests are "too many", i.e., when I get throttled. I want to maximize the number of requests to minimize the time it takes to get through one cycle.
A solution that comes to mind is to develop an adaptive program that slowly decreases the time between requests until it gets throttled, then abruptly adds some time back until a nice balance is found and maintained. This seems like a lot of work, though. Right now I'm working on the assumption that 1/second is about the highest I can safely go (0.5 seconds on average per round trip, then a cool-down of another 0.5 seconds).
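In code, the adaptive idea would look roughly like this; a sketch with guessed rate constants, relying only on the fact that Graph signals throttling with HTTP 429 and (usually) a Retry-After header:
import time
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def polite_get(url, token, delay, min_delay=0.25, max_delay=60.0):
    # GET with adaptive pacing: back off hard on 429, creep back up on success.
    while True:
        time.sleep(delay)
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        if resp.status_code == 429:
            # Honor the server's hint if present, otherwise double our delay.
            retry_after = float(resp.headers.get("Retry-After", delay * 2))
            delay = min(max(retry_after, delay * 2), max_delay)
            continue
        resp.raise_for_status()
        delay = max(delay * 0.9, min_delay)   # success: gently shrink the delay
        return resp.json(), delay

# usage: poll each user's messages, carrying the adapted delay through the cycle
# (user_ids and token come from elsewhere in the app)
# delay = 1.0
# for uid in user_ids:
#     data, delay = polite_get(f"{GRAPH}/users/{uid}/messages?$top=10", token, delay)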
What is the best way to deal with an unknown throttling limit in general, and Microsoft Graph in particular?
Edit:
While I think the accepted answer is a good response, it might not be suitable for all cases. For instance, if you don't want to use the Office 365 API, and you don't mind using beta features, you might check out delta queries (delta tokens), which seem to be designed for real-time syncing with the data.
The only potential downside with the accepted answer is that you still need a subscription for each user you want to track (I think...), and there are limits on those. Very curious as to how other people tackled this problem.
While still in Preview, you may want to take a look at Outlook Streaming Notifications. These APIs provide a more robust notification model than simple web hooks. You would still need to establish multiple subscriptions but I expect you'll see far less latency.

"reasonable" use of web APIs to sync data

My goal is to synchronize a web-application with an internal database. The web-application has a public API, but in order to fully synchronize the two sources I would need to make around 2000 separate API calls every time. My instinct tells me that this is excessive and possibly irresponsible, but I lack the experience to know for sure.
In this particular case the web application is Asana, but I've encountered similar situations before with other services. Is there any way to know if you're abusing a service through excessive API calls? I know I'm not going to DoS a company like Asana, but I can't shake the feeling that there must be a better way than making ~150k requests per day.
The only other option I can think of is to update the web-service only when I know there's been a change in the database, but I'll lose a lot of capability that way.
I apologize for the subjectivity of this question, but I'm really hoping that someone can explain if there's any kind of etiquette that's expected when using public APIs.
(I work at Asana)
This is an excellent question, or rather set of questions.
You are designing a system that will repeatedly make requests for every object. What will happen as the number of objects grows? Even if your initial request rate were reasonable, this would suffer problems with scalability. A more scalable solution is one that scales with the number of changes in the system. This will also grow over time, but much more slowly - the number of changes a single user can make per day is relatively constant, but the total number of objects they've created over time grows and grows. So my first piece of advice would be to avoid doing things this way, and instead find a way to detect changes and just act on those. It would be interesting to know why you feel you'll lose capability by taking this approach.
Now, I happen to know that the Asana API does not currently provide you with any friendly mechanism to just detect changes in the system. This is a commonly requested feature and we are looking into it, though I unfortunately cannot promise a delivery date. So you might be left with no choice but to poll our system for now.
As for being polite to the API, many service providers set limits on their API usage to prevent accidental or malicious use of the API from impacting the service to their other customers -- Asana is no exception. Sometimes these limits are published, other times not, and there is no standard limit: it all depends on the service. But it is very thoughtful of you to be curious about service limitations.
That said, 150k requests per day is, for the Asana API, kind of a lot. If all of our API users gave us that much traffic, we might be serving more requests per day than Google Web Search, and we're not quite that scalable yet. :) Technically, we might sometimes be able to handle that volume from a single user.
If you must poll, try to poll at intervals like 15 minutes. But please do not poll your entire workspace that often; it's likely to be too much traffic/data. We're working on trying to provide you with a better solution.
If you do happen to make too many requests of the Asana API, you will get back HTTP status code 429 instead of your desired response; you can read more about that here (https://asana.com/developers/documentation/getting-started/errors).
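To make the "act on changes" advice concrete: the tasks endpoint accepts a modified_since filter, so a polite 15-minute poll only has to pull what actually changed. A rough sketch (sync_to_database is a placeholder for your own logic; check the current API docs before relying on the details):
import time
import datetime
import requests

ASANA = "https://app.asana.com/api/1.0"

def poll_changes(project_id, token, interval=15 * 60):
    # Poll one project every 15 minutes, fetching only tasks modified since last sync.
    last_sync = datetime.datetime.utcnow().isoformat()
    while True:
        resp = requests.get(
            f"{ASANA}/tasks",
            params={"project": project_id, "modified_since": last_sync},
            headers={"Authorization": f"Bearer {token}"},
        )
        if resp.status_code == 429:
            # Throttled: wait as long as the server asks, then retry.
            time.sleep(int(resp.headers.get("Retry-After", "60")))
            continue
        resp.raise_for_status()
        last_sync = datetime.datetime.utcnow().isoformat()
        for task in resp.json()["data"]:
            sync_to_database(task)  # placeholder: your own sync logic
        time.sleep(interval)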

Using JMeter, how to find how many simultaneous users a web server will handle

I have downloaded JMeter and played around with it. It is working fine. I have one quick question and need help with it.
Using the JMeter tool, how can I tell that a given web server efficiently handles n users in a given second or minute?
Please help me as soon as possible.
Thank you,
Regards,
Ganapathy
Answer to that question is the very reason why we do performance testing.
We primarily want to find out how application response time grows as we increase the number of parallel users.
To find out, you can start with the JMeter Plugins Ultimate Thread Group to gradually add users during a test.
To visualize test results, use the Response Times vs Threads graph, which also comes with JMeter Plugins.
But that graph only shows the average response time for a specific number of users. To include the time component, use the Composite Graph, in which you'll combine the number of threads (users) with response time; you'll then be able to see in real time how response time changes with the number of users.
That's where I'd start.
Try to increase the number of requests (and the threads that fire them) and see how the response time behaves. Part of the answer is the question of which response time is acceptable to you. Also note that this will not be equivalent to real users, as you are firing requests from a single machine and in a manner not comparable to real users requesting real pages.
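As a toy illustration of that idea (not a replacement for JMeter; the URL is a placeholder), a small Python script that ramps up concurrent users and watches the response time might look like:
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://your-server.example/page"  # placeholder target

def timed_get(_):
    start = time.perf_counter()
    urllib.request.urlopen(URL).read()
    return time.perf_counter() - start

# Ramp from 1 to 50 concurrent users and record the response times at each step.
for users in (1, 5, 10, 20, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_get, range(users * 4)))
    print(f"{users:3d} users: avg {statistics.mean(times):.3f}s, max {max(times):.3f}s")
The number of users at which the average time starts climbing steeply is the knee you are looking for.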
Read this comparison; it may help you.

Running MTurk HITs on external website

I am implementing a website on which recruited MTurk workers will perform tasks. I plan to recruit workers with MTurk tasks and then redirect them to an external website for the actual work. I have the following questions about this plan.
Are there any foreseeable problems with this approach to running HITs? If so, how can we mitigate them?
How should I implement the authentication procedure on my external site? For example, how can I make sure the people who come to the website to perform a specific task are indeed the same group of people recruited earlier for this particular task on MTurk?
When the workers finish the task, how should I integrate the payment procedure with MTurk based on their performance? For example, say a worker is owed $3 after finishing the task on my external site; is it possible for me to tell MTurk to pay him/her this amount programmatically?
The external site will be built using Python, if such detail matters.
Any suggestions and comments based on your experiences and insights in using MTurk would be much appreciated!
I am thinking through this for a similar project of mine, and I've experimented as a worker myself. Here is my plan; I hope it is of use to you. (I have not implemented it yet. It is based on an academic HIT I participated in as a worker.) Here goes:
A. Create a template that has language something like:
1. Please open this web site in a new browser window:
http://your-url.xyz.blah/tasks/${token}
2. Read and follow the instructions there.
3. After completing the task, you will receive a confirmation code. Paste
it here: [________]
B. Create some random tokens for your Mechanical Turk data file:
1A1B43B327015141
09F49F2D47823E0C
B5C49A18B3DB56F4
4E93BB63B0938728
CCE7FA60BFEB3198
...
(Generate these tokens from your app; it needs to cross-reference them.)
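In Python, generating tokens like those above (and the confirmation codes in step D) takes a couple of lines with the standard secrets module; a sketch, with storage and cross-referencing left to your app:
import secrets

def new_token():
    # 16 hex characters, like the examples above; plenty for one-time use.
    return secrets.token_hex(8).upper()

# Generate a batch for the MTurk data file; remember them for cross-referencing.
tokens = {new_token(): {"used": False} for _ in range(100)}
print("\n".join(tokens))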
C. Your app extracts the token from the URL, looks up the task, and does whatever it needs to do. I personally don't worry about people stumbling onto a URL, since it is a one-time-use token.
D. After a user completes the task on the external web site, the external app gives a confirmation code. The confirmation code should be random and opaque. Only your application will know if any particular code corresponds to a correct or incorrect answer. In fact, if you want, the correctness may not even be determined in real time -- it could be the result of an aggregation and/or comparison across multiple submissions.
E. Write some code to interact programmatically. Take the token and confirmation code supplied from the MTurk result and make sure they match with your external app. If they don't match, reject the HIT. If they match, check the correctness in your external app and approve or reject. You might consider a bonus pay structure.
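With today's boto3 MTurk client (which postdates this answer, so treat it as an updated sketch), step E might look like the following; settle_assignment and its arguments are illustrative names, and the assignment/worker IDs come from your retrieved results:
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

def settle_assignment(assignment_id, worker_id, token_ok, answer_ok, bonus=None):
    # Approve or reject an assignment after checking it against the external app.
    if not token_ok:
        mturk.reject_assignment(
            AssignmentId=assignment_id,
            RequesterFeedback="Confirmation code did not match our records.",
        )
        return
    mturk.approve_assignment(
        AssignmentId=assignment_id,
        RequesterFeedback="Thanks for participating!",
    )
    if answer_ok and bonus:
        # Optional bonus pay structure for correct answers.
        mturk.send_bonus(
            WorkerId=worker_id,
            AssignmentId=assignment_id,
            BonusAmount=str(bonus),   # a dollar amount as a string, e.g. "1.50"
            Reason="Correct answer bonus.",
        )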
So, to answer your particular questions:
I don't anticipate problems with the approach I described. That said, Mechanical Turk is both an art and a science. Perhaps more art. Writing good questions and paying Turkers appropriately is something you have to figure out with a combination of common sense, market research, and experimentation.
See (C) above. A token is designed to only be used once. Use long enough tokens and the probability of collision becomes very low.
See (E) above. The Mechanical Turk Developer Guide is a good place to start.
Please share your results back. Or have the Turkers send StackOverflow hundreds of postcards. :)
Notes:
I'm currently exploring qualification tests. I suspect they can be very useful.
I want to get a Turker's Worker ID in my external application, but I haven't figured that part out yet. I'm reading up on it; for example: Getting workerId by assignmentId
I am thinking about using the ExternalQuestion feature from the API: "... you can host the questions on your own web site using an "external" question. ... A HIT with an external question displays a web page from your web site in a frame in the Worker's web browser. Your web page displays a form for the Worker to fill out and submit. The Worker submits results using your form, and your form submits the results back to Mechanical Turk. Using your web site to display the form gives your web site control over how the question appears and how answers are collected."
You might also find PsiTurk to be useful: "PsiTurk is an open platform for conducting custom behavioral experiments on Amazon's Mechanical Turk. ... It is intended to provide most of the backend machinery necessary to run your experiment. It uses AMT's External Question HIT type, meaning that you can collect data using any website. As long as you can turn your experiment into a website, you can run it with PsiTurk!"