Historical index performance in Bloomberg / VBA

I am making a tool which plots the historical (12800 dates, 1980-today) stock performance of subsets of 3500 companies based on a set of sustainability rating criteria chosen by the user. For example, one could pick "worker health and safety" and then see the stock performances of the companies with the best and worst ratings in that area compared to the average.
After the user inputs constraints, I produce a list of Bloomberg tickers for which I want to analyze performance. Is there a way to upload such a list of tickers to Bloomberg, and have it return the historical performance data? Or even a well-documented source of help / examples? I asked the help desk but they just told me to read their documentation, which didn't prove to be of much use.
I am avoiding at all costs downloading the ~44,000,000 data points I might theoretically need (35 years of daily last price for all 3500 companies) - so any alternative ideas would be very much appreciated.

Using the Bloomberg Excel Add-In, you can automate requests through its historical data-set formula, BDH(). Once you have the core formula syntax down, you can easily do things like drag cells to pull the same data set for new tickers.
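For instance, here is a minimal VBA sketch of that idea (the sheet names, layout, and macro name are placeholders, and it assumes the Bloomberg Excel Add-In is installed and you are logged in to the terminal). It writes one BDH() historical-data formula per screened ticker, so only the subset that passes the user's sustainability filter is ever downloaded rather than the full ~44,000,000 points:

    ' Sketch only: assumes the screened Bloomberg tickers (e.g. "XYZ US Equity")
    ' sit in a contiguous block starting at A1 of a sheet named "Universe".
    Sub PullHistoricalPrices()
        Dim tickers As Range, cell As Range
        Dim col As Long

        Set tickers = Worksheets("Universe").Range("A1", _
                      Worksheets("Universe").Range("A1").End(xlDown))

        col = 1
        For Each cell In tickers
            ' One BDH() formula per ticker: daily last price from 1980 onwards.
            ' The add-in fills the cells below with a date column and a price column.
            ' An empty end date should default to today; use an explicit date if
            ' your add-in version requires one.
            Worksheets("Prices").Cells(1, col).Formula = _
                "=BDH(""" & cell.Value & """, ""PX_LAST"", ""19800101"", """")"
            col = col + 2
        Next cell
    End Sub

From there you can aggregate the returned price columns (best-rated vs. worst-rated vs. average) however you like.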
Your goal, however, appears to be something like a stock screener: in your example you mention that you want it to be custom and flexible, so a user can screen stocks based on a particular set of criteria.
Maybe you're trying to do something conceptually like this: https://www.google.com/finance/stockscreener
But with a wider breadth of data. To avoid the cost of network transfers and data storage, you need to be able to quickly eliminate a large part of the screening universe up front, using a few data sets or indicators that are strong at filtering out the stocks whose criteria the user isn't interested in.

Related

Data model guidance, database choice for aggregations on changing filter criteria

Problem:
We are looking for some guidance on what database to use and how to model our data to efficiently query for aggregated statistics as well as statistics related to a specific entity.
We have different underlying data but this example should showcase the fundamental problem:
Let's say you have data of Facebook friend requests and interactions over time. You now would like to answer questions like the following:
In 2018 which American had the most German friends that like ACDC?
Which are the friends that person X most interacted with on topic Y?
The general problem is that we have a lot of changing filter criteria (country, topic, interests, time) on both the entities that we want to calculate statistics for and the relevant related entities to calculate these statistics on.
Non-Functional Requirements:
It is an offline use case, meaning there are no inserts, deletes, or updates; instead, every X weeks a new complete dump is imported to replace the old data.
We would like an upper bound of 10 seconds to answer our queries; the faster the better, and a maximum of 2 seconds per query would be great.
The actual data has around 100-200 million entries, and the growth rate is linear.
The system has to serve a limited number of concurrent users, 100 at most.
Questions:
What would be the right database technology or mixture of technologies to solve our problem?
What would be an efficient data model for computing aggregations with changing filter criteria in several dimensions?
(Bonus) What would be the estimated hardware requirements given a specific technology?
What we tried so far:
Setting up a document store with denormalized entries. Problem: It doesn't perform well on general queries because it has to scan too many entries for aggregations.
Setting up a graph database with normalized entries. Problem: performs even more poorly on aggregations.
You talk about which database to use, but it sounds like you need a data warehouse or business intelligence solution, not just a database.
The difference (in a nutshell) is that a data warehouse (DW) can support multiple reporting views, custom data models, and/or pre-aggregations, which allow you to do advanced analysis and detailed filtering. Data warehouses tend to hold a lot of data and are generally built to be very scalable and flexible in terms of how the data will be used.
A business intelligence (BI) tool is a "lighter" version of a data warehouse, where the goal is to answer specific data questions extremely rapidly and without heavy technical end-user knowledge. BI tools provide a lot of visualization functionality (easy-to-configure graphs and filters). BI tools are often used together with a data warehouse: the data is modeled, cleaned, and stored inside the warehouse, and the BI tool pulls the prepared data into specific visualizations or reports. However, many companies (particularly smaller ones) do use BI tools without a data warehouse.
Now, there's the question of which data warehouse and/or BI solution to use.
That's a whole topic of its own & well beyond the scope of what I write here, but here are a few popular tool names to help you get started: Tableau, PowerBI, Domo, Snowflake, Redshift, etc.
Lastly, there's the data modeling piece of it.
To summarize your requirements, you have "lots of changing filter criteria" and varied statistics that you'll need, for a variety of entities.
The data model inside of a DW would often use a star, snowflake, or data vault schema. (There are plenty of articles online explaining those.) If you're using a BI tool on its own, you can de-normalize the data into a combined dataset, which gives you a variety of filtering and calculation options while still maintaining high performance and speed.
Let's look at the example you gave:
Data of Facebook friend requests and interactions over time. You need to answer:
In 2018 which American had the most German friends that like ACDC?
Which are the friends that person X most interacted with on topic Y?
You want to filter/re-calculate the answers to those questions based on country, topic, interests, time.
One potential dataset can be structured like:
Date of Interaction | Initiating Person's Country | Responding Person's Country | Topic | Interaction Type | Initiating Person's Top Interest | Responding Person's Top Interest
This would allow you to easily count the number of interactions, grouped and/or filtered by any of those columns.
As you can tell, this is just scratching the surface of a massive topic, but what you're asking is definitely do-able & hopefully this post will help you get started. There are plenty of consulting companies who would be happy to help, as well. (Disclaimer: I work for one of those consulting companies :)

Pulling large quantities of data takes too long. Need a way to speed it up

I'm creating a client dashboard website that displays many different graphs and charts of different views of data in our database.
The data consists of records of medical patients and the companies they work for, for insurance purposes. The data is displayed as aggregate charts, but there is a filter feature on the page that the user can use to filter individual patient records. The fields that they can filter by are:
Date range of the medical claim
Relationship to the insurance holder
Sex
Employer groups (user selects a number of different groups they work with, and can turn them on and off in the filter)
User Lists (the user of the site can create arbitrary lists of patients and save their IDs and edit them later). Either none, one, or multiple lists can be selected. There is also an any/all selector if multiple are chosen.
A set of filters that the user can define (with preset defaults) from other, more internally structured pieces of data. The user can customize up to three of them and can select any one, or none of them, and they return a list of patient IDs that is stored in memory until they're changed.
The problem is that loading the data can take a long time, with some pages taking from 30 seconds to a minute to load (the page is loaded first and the data is then downloaded as JSON via an AJAX call while a loading spinner is displayed). Some of the stored procedures we use are very complex, requiring multiple levels of nested queries. I've tried using the Query Analyzer to simplify them, but we've made all the recommended changes and it still takes a long time. Our database people have looked and don't see any other way to make the queries simpler while still getting the data that we need.
The way it's set up now, only changes to the date range and the employer groups cause the database to be hit again. The database never filters on any of the other fields. Any other changes to the filter selection are made on the front end. I tried changing the way it worked and sending all the fields to the back end for the database to filter on, and it ended up taking even longer, not to mention having to wait on every change instead of just a couple.
We're using MS SQL 2014 (SP1). My question is, what are our options for speeding things up? Even if it means completely changing the way our data is stored?
You don't provide any specifics - so this is pretty generic.
Speed up your queries - this is the best, easiest, least error-prone option. Modern hardware can cope with huge datasets and still provide sub-second responses. Post your queries, DDL, sample data and EXPLAINs to Stack Overflow - it's very likely you can get significant improvements.
Buy better hardware - if you really can't speed up the queries, figure out what the bottleneck is, and buy better hardware. It's so cheap these days that maxing out on SSDs, RAM and CPU will probably cost less than the time it takes to figure out how to deal with the less optimal routes below.
Caching - rather than going back to the database for everything, use a cache. Figure out how "up to date" your dashboards need to be, and how unique the data is, and cache query results if at all possible. Many development frameworks have first-class support for caching. The problem with caching is that it makes debugging hard - if a user reports a bug, are they looking at cached data? If so, is that cache stale - is it a bug in the data, or in the caching?
Pre-compute - if caching is not feasible, you can pre-compute data. For instance, when you create a new patient record, you could update the reports for "patients by sex", "patients by date", "patients by insurance co", etc. This creates a lot of work - and even more opportunity for bugs.
De-normalize - this is the nuclear option. Denormalization typically improves reporting speed at the expense of write speed, and at the expense of introducing lots of opportunities for bugs.

Fast reporting with user parameters and temp result sets

I have come across a problem with reporting from SQL Server databases using SSRS, that I wonder if you could help me with.
When you have a huge amount of data in a table, you want to select only the rows that match certain criteria, you want to allow the users to specify those criteria (for example, a start date and end date), and you then want to take that filtered data and perform a ton of other transformations on it, including producing various temporary result sets along the way (using CTEs, table variables, or temp tables) to finally produce the report, this basically takes ages in SQL. You can do it, but your users might have to wait an hour or two from the moment they've hit View Report to their report being rendered.
I don't know much about MDX or DAX, cubes or tabular models, but I wonder if there is a quicker way to do what I want. Note the important aspect of the problem: the user is specifying a criteria that has to go all the way back to the original table, and then various transformations (including temp result sets) have to be applied to produce the final report.
What is the best way to do this? Am I doing it the only way possible? I know it's a broad question, but I'd like to know, theoretically, what the answer is. Where should I be looking? Should I be looking at Cubes? Tabular Models? Should I be using R in SQL Server?
There is always a balance when it comes to handling large datasets. Sometimes it makes sense to do some of the work ahead of time so that on-demand reports can run in a reasonable amount of time.
In order for a model to be a good option here are some general guidelines:
Many reports would be able to use common attributes from the model
The data involves aggregates, not just lists of records
The data does not need to be live
You have plenty of development and testing time
Anyone who would be using it as a data source will have to be trained on the structure and be at least slightly familiar with MDX
Another option for you to consider is to have a stored procedure that "prepares" the data for you overnight in a separate table. This table could be well indexed because the write time is not as important. The report would then point to this table so it can quickly retrieve the data it needs to present. This shifts most of the preparation/aggregation work to the overnight process. You can still, of course, have parameters that limit how much of this data you pull back.
Based on the little bit of information you've given us (300 million rows in a single non-normalized table), there is definitely a faster way. However, there will not be any quick solutions and you haven't provided enough information for me to give any recommendations.
I think you may need to seek some professional help to review your infrastructure and needs along with your usage and objectives so you can be pointed in the right direction.

Best way to report comparisons of one agency to the rest of the state/nation

When attempting to do some benchmarking-type reports, I run into extreme slowness due to the amount of data residing in the database, and this will only get worse over time. I'm curious what would be considered the best approach for reports that show, for example, the percentage of patients entering the hospital within a certain date range who were there due to a specific condition, as well as how that particular hospital compares to the state percentage and the national percentage. Of course, this is all based on the hospitals whose data resides in the database. I have just been writing stored procedures to calculate these percentages, but I know this isn't the best approach. I'm curious how other, more experienced reporting professionals would tackle this. I'm currently using SSRS for reporting. I know a little about SSAS, but not enough to know if I should consider it for this type of reporting.
This all depends on the data-structure and the kind of calculations you have to do.
You try to narrow down the amount of data you have to process, and the complexity of the operations, in every possible way.
If you have lots of data on a slow system, you first try to select only the needed data, transfer it to the calculation point, and then keep it cached as long as you can.
If you have huge amounts of data, you try to preprocess it as much as you can. E.g. data warehouses typically have a date table with year/month/day/day-of-week/week-of-year etc. in it, and the other tables just reference it; this way you avoid time-consuming date calculations.
If the operations are complex, you have to analyze them to make them simpler or faster, but it is impossible to predict from here how much room for improvement there is, if any.
It all depends on your understanding of the data structure and the processes you need it for, so that you can improve each part as much as possible.
I haven't worked with SSAS myself yet, but it is also a great tool, though (IMHO) more suited to lots of different kinds of analysis.

How can I distribute a number of values Normally in Excel VBA

Sorry, I know the question isn't as specific as it could be. I am currently working on a replenishment forecasting system for a clothing company (don't ask why it's in VBA). The module I am currently working on distributes forecasts down to a size level. The idea is that the planners can forecast the number to sell, then specify a ratio between the sizes.
To make the interface a bit nicer, I was going to give them 4 options: assess trend, manual entry, Poisson, and Normal. The last two are where I am having an issue. Given a mean and SD, I'd like to drop in a ratio (preferably as percentages) between the different sizes. The number of sizes can vary from 1 to ~30, so it's going to need to be a calculation.
If anyone could point me towards a method I'd be eternally grateful - likewise if you have suggestions for a better method.
Cheers
For the sake of anyone searching for this: although it is only a temporary solution, I used probability mass functions to get the ratios. This allowed the user to modify the mean and SD and thus skew the curve as they wished, and I could then use the ratios in my calculations. Poisson also worked with this method but turned out to be a slightly stupid choice of distribution for this purpose.
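For reference, a minimal sketch of that approach in VBA (the function name and the 1..n size indexing are illustrative assumptions, not anyone's actual code): it evaluates the Normal density at each discrete size index, then normalises the weights so they sum to 100%, giving a percentage ratio per size.

    ' Sketch only: spread 100% across n sizes using a Normal curve.
    ' Sizes are assumed to be indexed 1..n, with the mean and SD expressed on
    ' that same index scale (e.g. mean 5.5 centres the curve between sizes 5 and 6).
    Function NormalSizeRatios(n As Long, mean As Double, sd As Double) As Double()
        Dim weights() As Double
        Dim i As Long, total As Double
        Dim pi As Double

        pi = 4 * Atn(1)
        ReDim weights(1 To n)

        ' Evaluate the Normal density at each size index.
        For i = 1 To n
            weights(i) = Exp(-((i - mean) ^ 2) / (2 * sd ^ 2)) / (sd * Sqr(2 * pi))
            total = total + weights(i)
        Next i

        ' Normalise so the ratios sum to 100%.
        For i = 1 To n
            weights(i) = weights(i) / total * 100
        Next i

        NormalSizeRatios = weights
    End Function

The Poisson option follows the same pattern: swap the weight line for the Poisson PMF (lambda ^ i * Exp(-lambda) / WorksheetFunction.Fact(i)) and normalise in the same way.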