I have a set of users in my system, many of whom are young users with regard to the age of their account.
I have then looked at each user's events per day over the lifetime of the account (i.e. the event can occur on the 2nd day of the account existing, then again on the 10th, and so on, all the way up to the current lifetime of the account).
What I am trying to do is look at the average occurrence of this event on a daily basis over all accounts, but I need to take into consideration that I actually have many more young users (i.e. users who have not been around for a long time).
I have tried a couple of statistical tricks, but am not 100% sure of the best method for doing this.
Any pointer in the right direction would help.
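For illustration, here is a minimal Python sketch of one straightforward normalization (the data shape is made up, and this is only one of several possible approaches): count events per day-of-account-life and divide by the number of accounts old enough to have reached that day, so the many young accounts don't distort the averages for later days.

    from collections import defaultdict

    def daily_event_rate(accounts):
        """Average events per day-of-account-life, normalized by how many
        accounts have actually lived long enough to reach each day.

        `accounts` is a list of dicts like:
          {"age_days": 42, "event_days": [2, 10, 17]}
        where `event_days` lists the days (since signup) the event occurred.
        """
        event_counts = defaultdict(int)    # day-of-life -> total events seen
        accounts_alive = defaultdict(int)  # day-of-life -> accounts old enough

        for account in accounts:
            for day in range(1, account["age_days"] + 1):
                accounts_alive[day] += 1
            for day in account["event_days"]:
                event_counts[day] += 1

        # The rate for day N only averages over accounts that reached day N,
        # so the many young accounts don't drag down rates for later days.
        return {day: event_counts[day] / accounts_alive[day]
                for day in sorted(accounts_alive)}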
This is more of a general discussion than a code question.
I have a website monitoring platform where users of the system can input their website URL and we check it every X minutes, based on the customer's interval. At each interval, an entry is stored as an UptimeCheck model in the Laravel 8 project, with the status being down or up.
If a customer has 20 monitors, and each checks every minute, then over a 30-day period that one customer would accumulate close to a million rows.
My question, really, is: do I need to keep this number of rows?
The reason this number of rows is kept is so that we can present a graph showing the average website uptime.
My thinking is that if I created some kind of SVG programmatically for each day and stored this in the table, then I wouldn't need to store as many entries; but my concern here is how I would merge SVG models into one to present a daily graph.
What kind of libraries could I use and how else might I approach this?
Unlike performance data, the trick for storing uptime data is simple: you don't store it. ;)
You need to store DOWNTIME data instead. Register only unavailability events and extrapolate uptime when displaying reports.
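For example, here is a minimal Python sketch of that idea (the field names are made up, not your Laravel schema): store each outage as a start/end pair and compute uptime for a reporting window by subtracting the overlapping downtime.

    from datetime import datetime, timedelta

    def uptime_percentage(downtime_windows, period_start, period_end):
        """Derive uptime for a reporting period from stored downtime windows.

        `downtime_windows` is a list of (went_down, came_back_up) datetimes;
        anything not covered by a downtime window counts as "up".
        """
        period_length = (period_end - period_start).total_seconds()
        down_seconds = 0.0
        for went_down, came_back_up in downtime_windows:
            # Clip each outage to the reporting period before summing.
            start = max(went_down, period_start)
            end = min(came_back_up, period_end)
            if end > start:
                down_seconds += (end - start).total_seconds()
        return 100.0 * (period_length - down_seconds) / period_length

    # Hypothetical usage: one 15-minute outage in a 24-hour window.
    day_start = datetime(2024, 1, 1)
    outages = [(day_start + timedelta(hours=3),
                day_start + timedelta(hours=3, minutes=15))]
    print(uptime_percentage(outages, day_start, day_start + timedelta(days=1)))  # ~98.96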
I've been tasked to create a data visualisation dashboard that relies on me drilling into the existing database.
One report is 'revenue per available covers'; part of the calculation is determining how many hours were booked against how many hours were available.
The problem is the 'hours available': currently this is stored in a schedule table that has a 1-1 link with the venue, and if admins want to update it there is a simple CRUD panel with the pre-linked field ready to complete.
I have realised that if I rely on this table, then whenever the schedule changes in the future, the calculations will change for any historic data.
Any ideas on how to keep a 'historic' schedule with as little impact on the database as possible?
What you have is a (potentially) slowly-changing dimension. Basically, there are two approaches:
For each transactional record, include the hours that you are interested in.
Store the schedule with time frames, which capture the schedule at a particular point in time.
In SQL Server, I would normally go for the second option, using effDate and endDate columns to capture the period when the schedule is active.
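For illustration, here is a minimal Python sketch of the second option (table and column names are illustrative only): each schedule version carries the period it was in force, and historic reports look up the version valid at the booking date.

    from datetime import date

    # Each row records the period during which that version of the schedule
    # applied, so historic calculations always use the hours valid at the time.
    schedule_history = [
        {"venue_id": 1, "hours_available": 60,
         "effDate": date(2023, 1, 1), "endDate": date(2023, 7, 1)},
        {"venue_id": 1, "hours_available": 72,
         "effDate": date(2023, 7, 1), "endDate": date(9999, 12, 31)},
    ]

    def hours_available_on(venue_id, on_date):
        """Return the hours available for a venue as of a given date."""
        for row in schedule_history:
            if row["venue_id"] == venue_id and row["effDate"] <= on_date < row["endDate"]:
                return row["hours_available"]
        return None

    print(hours_available_on(1, date(2023, 3, 15)))  # 60 - schedule in force at the time
    print(hours_available_on(1, date(2023, 8, 1)))   # 72 - after the change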
I would suggest that you start with a deeper explanation of the concept, such as the Wikipedia page.
I am trying to download as much information from Bloomberg for as many securities as I can. This is for a machine learning project, and I would like to have the data reside locally, rather than querying for it each time I need it. I know how to download information for a few fields for a specified security.
Unfortunately, I am pretty new to Bloomberg. I've taken a look at the Excel add-in, and it doesn't allow me to specify that I want ALL securities and ALL their data fields.
Is there a way to blanket-download data from Bloomberg via Excel, or do I have to do this programmatically? I appreciate any help on how to do this.
Such a request is unreasonable. Bloomberg has tens of thousands of fields for each security, from fundamental fields like sales, through technical analysis like Bollinger bands, to whether the CEO is a woman and whether the company abides by Islamic law. I doubt all of these interest you.
Also, some fields come in "flavors". Bloomberg allows you to set arguments when requesting a field; these are called "overrides". For example, when asking for an analyst recommendation, you could specify whether you're interested in the yearly or quarterly recommendation, and you could also specify how you want the recommendation consensus calculated. Are you interested in GAAP or IFRS reporting? What type of insider buys do you want to consider? I hope I'm making it clear: the possibilities are endless.
My recommendation, when approaching a project like the one you're describing, is to think in advance about which aspects of the security you want to focus on. Are you looking at value? Growth? Technical analysis? News? Then "sit down" with a Bloomberg rep and ask which fields apply to that aspect. Then download those fields.
Also, try to reduce your universe of securities. Bloomberg has data for hundreds of thousands of equities, and the total number of securities (including non-equities) is probably in the millions. You should reduce that universe to the securities that interest you (only EU? only US? only above a certain market capitalization?). This could make your research more applicable to real life. What I mean is that if you find out that certain behavior indicates a stock is going to go up, but you can't buy that stock, then that's not that interesting.
I hope this helps, even if it doesn't really answer the question.
They have specific "Data Licence" products available if you or your company can fork out the (likely high) sums of money for bulk data dumps. Otherwise, as has been mentioned, there are daily and monthly restrictions on how much data (and what type of data) can be downloaded via their API. These limits are not very high at all, so by the sound of your request this will take a long and frustrating time. I think the daily limit is something like 500,000 hits, where one hit is one item of data, e.g. a price for one stock. So if you wanted to download only share price data for the 2,500 or so US stocks, you'd only manage 200 days' worth for each stock before hitting the limit. They also monitor your usage, so if you were consistently hitting 500,000 each day, you'd get a phone call.
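To put rough numbers on that (back-of-the-envelope only, using the approximate figures above rather than official Bloomberg values):

    # One "hit" is one security/field pairing, per Bloomberg's extended rules.
    DAILY_HIT_LIMIT = 500_000
    securities = 2_500            # e.g. roughly the US stock universe
    fields_per_security = 1       # just a closing price

    hits_per_trading_day_of_history = securities * fields_per_security
    days_of_history_per_calendar_day = DAILY_HIT_LIMIT // hits_per_trading_day_of_history
    print(days_of_history_per_calendar_day)  # 200 - matches the estimate above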
One tedious way around this is to manually retrieve data via the clipboard. You can load a chart of something (GP), right-click and copy the data to the clipboard. This stores all the data points that are on display, which you can then dump into Excel. This is obviously an extremely inefficient method but, crucially, it has no impact on your data limits.
Unfortunately you will find no answer to your (somewhat unreasonable) request, without getting your wallet out. Data ain't cheap. Especially not "all securities and all data".
You say you want to download "ALL securities and ALL their data fields." You can't.
You should go to WAPI on your terminal and look at the terms of service.
From the "extended rules":
There is a daily limit to the number of hits you can make to our data servers via the Bloomberg API. A "hit" is defined as one request for a single security/field pairing. Therefore, if you request static data for 5 fields and 10 securities, that will translate into a total of 50 hits.
There is a limit to the number of unique securities you can monitor at any one time, where the number of fields is unlimited via the Bloomberg API.
There is a monthly limit that is based on the volume of unique securities being requested per category (i.e. historical, derived, intraday, pricing, descriptive) from our data servers via the Bloomberg API.
We’ve come across this question fairly often at Load Impact, so I thought I’d add it to the Stack Overflow community to make it easier to find.
How do I calculate the number of concurrent users (VUs) that I need to simulate during a load test, in order to stress my system with the same kind of traffic that it will normally see in the course of a month, week or day?
Running a load test requires that you specify how many concurrent users should be simulated during testing. In other words, how many simulated users will be active, loading things or interacting with your site/app at the same time. Unfortunately, when looking at Google Analytics for example, we only see how many visits a website has per day or per month. A site can have a million visits per month, but still only ever experience a maximum of 100 concurrent visitors.
To convert the "visits per X" metric from Google Analytics, or some other analytics system, into a "concurrent users" metric that you can use for load testing, you can use the following method.
First, find out two things:
You need the total number of visits for a short time period when your site/app is at peak traffic levels. This can easily be found via e.g. Google Analytics by seeing what the highest number of visits was for a single hour in the course of e.g. a month. Look at the day that has seen the highest number of visits, and drill down to see what hour of that day was the busiest and how many visits you had during that hour. Note this value down. I will call this value "peak_hourly_visits" in this text.
You need to know the average time a user spends interacting with your site/app. In Google Analytics this is called "Average session duration" and I will call it that in this text also, but sometimes it is called "Average time on site". If this value changes a lot for your site/app depending on which time period you look at, you might want to use one of the larger values you find, to be on the safe side. We want all times in seconds, so if e.g. Google Analytics tells you "00:03:19" (3 minutes, 19 seconds) you should note down 199 as the average session duration.
When you have those two values you use this formula to calculate the number of concurrent users to use in your load test:
concurrent_users = (peak_hourly_visits * average_session_duration) / 3600
Provided that each simulated user (VU) in your load test behaves realistically (i.e. simulates a real user well), you will now be able to stress your site/app with the same kind of traffic that it normally only sees during peak traffic hours.
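As a quick sanity check, here is the same formula as a small Python helper, with made-up example numbers:

    def concurrent_users(peak_hourly_visits, average_session_duration_s):
        """concurrent_users = (peak_hourly_visits * average_session_duration) / 3600"""
        return peak_hourly_visits * average_session_duration_s / 3600

    def session_duration_to_seconds(hhmmss):
        """Convert a Google Analytics duration like '00:03:19' into seconds."""
        hours, minutes, seconds = (int(part) for part in hhmmss.split(":"))
        return hours * 3600 + minutes * 60 + seconds

    # Hypothetical numbers: 3,600 visits in the busiest hour, 3m19s average session.
    duration = session_duration_to_seconds("00:03:19")  # 199 seconds
    print(concurrent_users(3_600, duration))             # 199.0 concurrent VUs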
Is there any way to determine how many requests, or how much data, you have remaining in your request limit for the Bloomberg API?
From the Bloomberg HelpDesk, April 2014 (this is valid for a basic desktop client):
We have three kinds of limits.
You can have no more than 3,500 real-time fields open at the same time. If you exceed this limit you will see "NA Limit" as the error message, and you just need to delete some securities/fields for the error message to disappear and the values to show.
We also have a daily limit. The daily API limit is 500,000 hits per day. A "hit" is defined as one request for a single security/field pairing; therefore, if you request static data for 5 fields and 10 securities, that translates into a total of 50 hits. So try to refresh just the portion of the spreadsheet that really needs to be refreshed, and avoid refreshing it all or reopening it many times a day.
The last limit is a monthly limit. Our monthly limit comes from a proprietary model; only about 0.4% of our user base ever goes over it. This limit is based on unique securities and depends on the type of data being downloaded. For example, some of the data on the system, such as intra-day, is valued a little higher than historical end-of-day for any given list of securities. We do not recommend more than 5,000 to 7,000 unique identifiers per month, and a limit upgrade will only allow you to get the data needed to complete your project. Once a security has been used in a given month, using it again will not count again towards the monthly limit.
We normally grant 2 resets per month in case you exceed your daily limit, and if you exceed your monthly limit we grant 1 extension per month (10% more). If you breach the limit again, you will then need to wait until midnight for the daily limit to be reset automatically, or until the end of the month for the monthly limit to reset.
Bloomberg do not state what the explicit limits are, and there is no programmatic way of finding out what the limits are or what proportion of your limits you have used.
The best information from Bloomberg that I have found is on the WAPI page (in the terminal). On the menus on the LHS, go to WAPI Home > API Resources > API Data Limits. There are two pages, 'Extended Rules and Usage Limits' and 'Managing Your API Data Limits' that shed some further light on the matter.
Broadly speaking, there is a daily limit on individual data requests (i.e. security/field pairs, where duplicates are counted for each request). However, your limit for subscriptions is based on the number of securities you are subscribing to concurrently, i.e. if you expect to be requesting the price of a security every 5 minutes, you are much better off subscribing to that security's price. Then there is a monthly limit that is based on the number of unique securities you are making requests for.
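As a rough illustration of why subscribing is preferable for data you need repeatedly (approximate arithmetic, not official figures):

    # Repeated reference requests each count as hits against the daily limit,
    # whereas a subscription occupies one concurrent-security slot instead.
    polls_per_day = 24 * 60 // 5   # one price request every 5 minutes
    securities = 100

    hits_from_polling = polls_per_day * securities
    print(hits_from_polling)        # 28,800 hits/day against the daily budget
    print(securities)               # vs. 100 concurrent subscription slots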
There is an upper limit on the Bloomberg API of 500,000 hits per day.
-- information from Bloomberg Help Help
The daily limit is clearly stated; it is the monthly limit that is not, to my knowledge, disclosed in writing. I have been told the following in the context of discussions about Data Licence, which is one Bloomberg product for bulk data subscription. The monthly limit is expressed as a budget in $, and it is the equivalent price for your requests, priced under the Data Licence schema, which clearly is not secret if you enquire about that product.
So why the secrecy about the budget? The reason it is commercially sensitive is that this budget is many times the monthly cost of the Terminal Licence, so clearly if you (a) know what it is and (b) either have access via the API to the budget spent (nope) OR write software to 'count the cost' (not hard), then you could pony up a couple of terminals and vastly reduce your Data Licence spend. Bloomberg naturally frown on this sort of activity because it represents an arbitrage opportunity in their pricing model and it is not really 'playing nice'. They likewise do not like it if you hit 'the wrong kind of data' too often, or the monthly limit at all often, and these activities may prompt them to investigate your business model to be sure you are in compliance with all the T&Cs of the Data Addendum.
Out of courtesy to Bloomberg I am not posting that budget number here, but you should be able to get it from your salesperson and confirm the validity of what I have said; note that it may change at any time, as it is not part of any contract.
I don't believe this is possible programmatically, however if you speak to the Bloomberg helpdesk they will be able to tell you whether you are near the limit, and reset it for you if necessary. Obviously they will only do that a certain number of times. I have not managed to get a definitive answer as to what the limit is, but it's designed to be large enough that you would not hit it just running spreadsheets, which have a limit of 3500 Bloomberg real-time formulas.
If you feel the download limit is not breached but you still get the error message, you can run the following steps to solve the issue:
Close Excel completely.
From the Windows "Start" menu, select All Programs > Bloomberg > Stop API Process. A command prompt window appears.
Press <Enter> to close the window.
From the Windows Start menu, select All Programs > Bloomberg > API Environment Diagnostics.
Click the Start button.
When the test is complete, if there are any red errors, click the "Repair" button.
Re-open Excel and test a formula.
500,000 data points is the approximate daily limit; however, remember that different types of data use up varying amounts. It is not 1 for 1. Typically, requests for esoteric securities and fields will use up more data per request than PX_LAST for AAPL US, for example. There are also different types of request, such as reference or historical, which will consume your limit differently.
If you are requesting intra-day realtime data, these fields are typically not charged to your usage limit. Rather you have limits on how many times the realtime 'pipe' can be opened.
Bloomberg are typically very helpful at resetting your monthly data usage limit should you exceed it on an ad hoc basis. This is not written company policy, but it seems to be part of their customer care. If you are persistently breaching limits each month, they are likely to stop resetting your limits and try to move you to B-PIPE. But otherwise, in my experience, they are flexible.