How to reconcile API usage data with customers for monthly billing? - api

We're an enterprise selling SaaS. Each month, our company and the client company compare databases to bill for usage.
The problem is that the two counts disagree. For example, for 2022-01-01 through 2022-01-31, our database shows 10,000 API calls, but theirs shows 9,500.
We have to investigate together; their database may have dropped some records when logging.
Does anyone have a better way to reconcile data with customers?
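One way to make this mechanical instead of arguing over aggregate totals: have both sides log a shared request ID (e.g. an X-Request-Id header), exchange ID lists for the billing period, and diff them. A minimal sketch, assuming both exports are CSVs with a request_id column (the file and column names here are made up):

```python
# Reconciliation sketch: both sides export the request IDs they logged for
# the billing period, then diff the two sets to locate the exact gap.
# File names and the "request_id" column are assumptions for illustration.
import csv

def load_ids(path: str) -> set[str]:
    """Read one request ID per row from a CSV export."""
    with open(path, newline="") as f:
        return {row["request_id"] for row in csv.DictReader(f)}

ours = load_ids("provider_2022_01.csv")    # 10,000 rows in the example
theirs = load_ids("customer_2022_01.csv")  # 9,500 rows in the example

missing_on_their_side = ours - theirs   # calls we billed that they never logged
missing_on_our_side = theirs - ours     # calls they logged that we missed

print(f"agreed on {len(ours & theirs)} calls")
print(f"{len(missing_on_their_side)} calls absent from customer log")
print(f"{len(missing_on_our_side)} calls absent from provider log")
```

The diff tells both parties exactly which calls to investigate; the contract can then define whether billing follows the provider's log or the agreed intersection.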

Related

Tool / API for extracting football match results and automatically inserting them into a database

I have a HANA database on the SAP Neo Cloud Platform for storing the results of football events (matches of the Euro, World Cup, Champions League, and Premier League).
Each cup type has its own table (one for the Euro, one for the World Cup, ...), in which I store results for all seasons (for example, Euro 2020, 2016, 2012, ...).
As new data arrives almost every day, I need an automatic way to insert new result rows, either live or every day at the same time.
Is there a tool that will extract the data and generate inserts for the database, which I can deploy to the SAP Cloud Platform for automatic processing, or an open web service/database I can work with comfortably to automate it myself?
Thank you for any answer.
Depending on your data source (OData service, SOAP service, ...) you could simply use Smart Data Integration for SAP Cloud Platform to get the data into your database.
You can use an API that gives you all the matches in real time.
Check out this one:
https://rapidapi.com/hmoccupe/api/sports-scores
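Whichever feed you pick, the ingestion job itself can be small. Below is a rough sketch, assuming a JSON scores API and SAP's hdbcli Python driver; the endpoint, response fields, and table name are placeholders, not from any specific service:

```python
# Scheduled ingestion sketch: pull the day's results from a JSON scores API
# and write them into a HANA table. The endpoint, response fields, and table
# name are invented for illustration; adapt them to whichever feed you pick.
import requests
from hdbcli import dbapi  # SAP's Python driver for HANA

API_URL = "https://example-scores-api.test/v1/results"  # hypothetical feed

def ingest_daily_results(conn) -> None:
    matches = requests.get(API_URL, params={"date": "today"}, timeout=30).json()
    cur = conn.cursor()
    for m in matches:
        # UPSERT keyed on MATCH_ID, so re-running the job never duplicates rows.
        cur.execute(
            "UPSERT EURO_RESULTS (MATCH_ID, HOME, AWAY, HOME_GOALS, AWAY_GOALS) "
            "VALUES (?, ?, ?, ?, ?) WITH PRIMARY KEY",
            (m["id"], m["home"], m["away"], m["home_goals"], m["away_goals"]),
        )
    conn.commit()

conn = dbapi.connect(address="hana-host", port=443, user="INGEST", password="...")
ingest_daily_results(conn)  # trigger from a daily cron/scheduler job
```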
We are a provider of betting website solutions in Korea. All we need is a stable API or WebSocket with real-time data, without a web UI.
We require real-time and pre-match API data such as country, league, match, team, market (bet types), and odds.
The data should include game results (statistics and live animation). Live data should have low latency and cover at least soccer, basketball, volleyball, baseball, and ice hockey; the more sports the better.
You must provide a testable API and a quote.
The request frequency (QPS) is relatively high, so a monthly plan without request-rate limits would be best.

Azure Basic Tier DTUs - Different Rates for Basic Tier (B DTUs)

According to the following link, Azure Database Pricing, the hourly rate for a Basic Tier DTU service plan for the single-database model is $0.0068.
The recent billing invoice for my (pay-as-you-go) subscription lists the hourly rate for a 'Single Basic B DTUs' SQL Database as $0.1610.
Without trying to get hold of Microsoft, does someone know why their site advertises the Basic DTU hourly rate at one price while invoices show a different rate? Has anyone encountered something like this? I'm just trying to understand why there's a difference.
Any guidance or clarification would be greatly appreciated.
Double-check your invoice; that price is quite close to the $0.1613 S7 plan. Also note that the price varies per region, so check the region as well. If you still can't come up with an explanation, simply open a support ticket from within the Azure portal; they typically respond in a reasonable time and will happily resolve billing discrepancies.
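A quick sanity check you can run before opening a ticket (assuming roughly 730 billable hours in a month; your invoice shows the exact count):

```python
# Back-of-envelope check of the two hourly rates over a full month.
HOURS_PER_MONTH = 730        # approximate; the invoice shows exact hours

advertised_basic = 0.0068    # $/hour, from the Azure pricing page
invoiced_rate = 0.1610       # $/hour, from the invoice

print(f"Basic: ~${advertised_basic * HOURS_PER_MONTH:.2f}/month")   # ~$4.96
print(f"Invoiced: ~${invoiced_rate * HOURS_PER_MONTH:.2f}/month")   # ~$117.53
print(f"Ratio: {invoiced_rate / advertised_basic:.1f}x")            # ~23.7x
```

If the monthly total on the invoice lands near $118 rather than $5, the line item is almost certainly a higher tier mislabeled or misread, which fits the S7 theory above.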

How to download a lot of data from Bloomberg?

I am trying to download as much information from Bloomberg for as many securities as I can. This is for a machine learning project, and I would like the data to reside locally rather than querying for it each time I need it. I know how to download a few fields for a specified security.
Unfortunately, I am pretty new to Bloomberg. I've taken a look at the Excel add-in, and it doesn't let me specify that I want ALL securities and ALL their data fields.
Is there a way to blanket-download data from Bloomberg via Excel, or do I have to do this programmatically? I'd appreciate any help on how to do this.
Such a request is unreasonable. Bloomberg has tens of thousands of fields for each security: fundamental fields like sales, technical-analysis fields like Bollinger bands, and even whether the CEO is a woman or whether the company abides by Islamic law. I doubt all of these interest you.
Also, some fields come in "flavors". Bloomberg allows you to set arguments when requesting a field; these are called "overrides". For example, when asking for an analyst recommendation, you could specify whether you want the yearly or quarterly recommendation, how the recommendation consensus should be calculated, whether you want GAAP or IFRS reporting, and what type of insider buys to consider. I hope I'm making it clear: the possibilities are endless.
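For instance, with the Python blpapi library an override is attached to the request itself. A sketch (the field and override names below are illustrative examples; check FLDS on the terminal for real ones):

```python
# Sketch of a reference-data request with an override via the Python blpapi
# library (requires a Bloomberg terminal/connection). The field and override
# names are illustrative; look them up on FLDS before relying on them.
import blpapi

session = blpapi.Session()          # connects to localhost:8194 by default
session.start()
session.openService("//blp/refdata")
service = session.getService("//blp/refdata")

request = service.createRequest("ReferenceDataRequest")
request.getElement("securities").appendValue("IBM US Equity")
request.getElement("fields").appendValue("BEST_EPS")

# Each override is a (fieldId, value) pair appended to the request.
override = request.getElement("overrides").appendElement()
override.setElement("fieldId", "BEST_FPERIOD_OVERRIDE")
override.setElement("value", "1GY")  # e.g. a one-year-forward estimate period

session.sendRequest(request)
# ...then poll session.nextEvent() for PARTIAL_RESPONSE/RESPONSE events.
```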
My recommendation when approaching a project like the one you're describing: think in advance about which aspects of the securities you want to focus on. Are you looking at value? Growth? Technical analysis? News? Then sit down with a Bloomberg rep, ask which fields apply to that aspect, and download those fields.
Also, try to reduce your universe of securities. Bloomberg has data for hundreds of thousands of equities, and the total number of securities (including non-equities) is probably in the millions. You should reduce that universe to the securities that interest you (only EU? only US? only above a certain market capitalization?). This could also make your research more applicable to real life: if you find out that certain behavior indicates a stock will go up, but you can't actually buy that stock, that's not very interesting.
I hope this helps, even if it doesn't really answer the question.
They have specific "Data Licence" products available if you or your company can fork out the (likely high) sums of money for bulk data dumps. Otherwise, as has been mentioned, there are daily and monthly restrictions on how much data (and what type of data) can be downloaded via their API. These limits are not very high at all, so by the sound of your request this will take a long and frustrating time. I think the daily limit is something like 500,000 hits, where one hit is one item of data, e.g. a price for one stock. So if you wanted to download only share-price data for the 2,500 or so US stocks, you'd only manage 200 days of history per stock before hitting the limit. They also monitor your usage, so if you were consistently hitting 500,000 each day, you'd get a phone call.
One tedious way around this is to manually retrieve data via the clipboard. You can load a chart of something (GP), right-click, and copy the data to the clipboard. This stores all the data points on display, which you can dump into Excel. It is obviously an extremely inefficient method but, crucially, has no impact on your data limits.
Unfortunately you will find no answer to your (somewhat unreasonable) request without getting your wallet out. Data ain't cheap. Especially not "all securities and all data".
You say you want to download "ALL securities and ALL their data fields." You can't.
You should go to WAPI on your terminal and look at the terms of service.
From the "extended rules:"
There is a daily limit to the number of hits you can make to our data servers via the Bloomberg API. A "hit" is defined as one request for a single security/field pairing. Therefore, if you request static data for 5 fields and 10 securities, that translates into a total of 50 hits.
There is a limit to the number of unique securities you can monitor at any one time via the Bloomberg API; the number of fields is unlimited.
There is a monthly limit that is based on the volume of unique securities being requested per category (i.e. historical, derived, intraday, pricing, descriptive) from our data servers via the Bloomberg API.
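Given the hits = securities × fields accounting quoted above, it's worth budgeting a request before sending it. A trivial sketch (the 500,000/day figure is the rough limit mentioned in the earlier answer, not an official number):

```python
# Budget a Bloomberg API request before sending it: one "hit" per
# security/field pair, per the WAPI rules quoted above.
DAILY_LIMIT = 500_000  # rough figure from the earlier answer; confirm on WAPI

def hit_cost(n_securities: int, n_fields: int) -> int:
    """Hits consumed by one static request: securities x fields."""
    return n_securities * n_fields

print(hit_cost(10, 5))        # 50, matching the example in the quoted rules
cost = hit_cost(2500, 200)    # e.g. 2,500 US stocks x 200 daily closes
print(f"{cost:,} hits; fits in daily budget: {cost <= DAILY_LIMIT}")
```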

Yodlee - Transaction Storage

I'm creating a fin app which uses the Yodlee service to access users' transaction information. I want to display this information to the user each time they log in. So I'm wondering whether my app should store the transaction information in a database after the initial successful API query, or query the API each time the user logs in. I can see either way working, but I'm wondering which approach is standard among fin-app developers, and what the advantages/disadvantages are.
As mentioned, there are two ways to display the transactions to the user.
1. Query the API each time and display the transactions to the user.
Pros:
You don't need DB infrastructure to store the transactions.
Easy to implement.
Cons:
You depend on Yodlee every time you want to display transactions.
Depending on how many days/transactions you display, responses can get huge for users with many transactions.
If a network issue keeps your app from reaching Yodlee, the user experience suffers.
2. Query and store the transactions, then display them from your local database.
Pros:
You can store the user's transactions and even run analytics on them.
Users with many transactions aren't a problem; you can use custom queries to display them.
You could use Procedural Data Extracts to keep your data in sync with Yodlee, i.e., always have the latest data.
Cons:
You need to implement your own transaction-reconciliation logic.
You have to set up DB infrastructure.
These are the high-level pros and cons of both approaches; which to pick depends on the solution you are building and on how you want users to see their transactions in the app.
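If you go with option 2, the reconciliation logic largely amounts to upserting rows keyed by the transaction ID Yodlee returns, so re-syncs update pending transactions instead of duplicating them. A minimal SQLite sketch (the fetch function and field names are placeholders for the real Yodlee client):

```python
# Option 2 sketch: cache transactions locally, keyed by Yodlee's transaction
# id, so repeated syncs update rows (e.g. pending -> posted) instead of
# duplicating them. fetch_transactions stands in for your real Yodlee client.
import sqlite3

conn = sqlite3.connect("finapp.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS transactions (
           id          TEXT PRIMARY KEY,  -- Yodlee transaction id
           txn_date    TEXT,
           amount      REAL,
           description TEXT,
           status      TEXT               -- pending rows change on later syncs
       )"""
)

def sync(user_id: str, fetch_transactions) -> None:
    """Pull the latest transactions and upsert them into the local cache."""
    for t in fetch_transactions(user_id):  # placeholder API call
        conn.execute(
            "INSERT INTO transactions (id, txn_date, amount, description, status) "
            "VALUES (?, ?, ?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET amount = excluded.amount, "
            "description = excluded.description, status = excluded.status",
            (t["id"], t["date"], t["amount"], t["description"], t["status"]),
        )
    conn.commit()
```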

What database solution would you suggest for competitive online ticket sales?

Can you please give me a database design suggestion?
I want to sell tickets for events, but the problem is that the database can become a bottleneck when many users want to buy tickets for the same event simultaneously.
If I keep a counter of tickets left for each event, there will be many updates to this field (locking), but I can easily find out how many tickets are left.
If I generate the tickets for each event in advance, it will be hard to know how many tickets are left.
Maybe it would be better for each event to use a separate database (if the requests for that event are expected to be high)?
Maybe reservation should also be an asynchronous operation?
Should I use a relational database (MySQL, Postgres) or a non-relational database (MongoDB)?
I'm planning to use AWS EC2 servers, so I can run more servers if I need them.
I've heard that "relational databases don't scale", but I think I need one because transactions and data consistency matter when working with a fixed number of tickets. Am I right or not?
Do you know of any online resources on this kind of topic?
If you sell 100,000 tickets in 5 minutes, you need a database that can handle at least 333 transactions per second. Almost any RDBMS on recent hardware can handle that amount of traffic.
Unless you have a suboptimal database schema and/or SQL, but that's another problem.
First things first: when it comes to selling stuff (e-commerce), you really do need transactional support. This basically excludes NoSQL solutions like MongoDB or Cassandra.
So you must use a database that supports transactions. MySQL does, but not in every storage engine: make sure to use InnoDB and not MyISAM.
Of course, many popular databases support transactions, so it's up to you which one to choose.
Why transactions? Because you need to complete a bunch of database updates and be sure that they all succeed as one atomic operation. For example:
1) Make sure a ticket is available.
2) Reduce the number of available tickets by one.
3) Process the credit card and get approval.
4) Record the purchase details in the database.
If any of the operations fails, you must roll back the previous updates. For example, if the credit card is declined, you should roll back the decrement of available tickets.
The database will also lock those rows for you, so there is no chance that, between steps 1 and 2, someone else tries to purchase a ticket while the count of available tickets has not yet been decreased. Without the lock, a situation would be possible where only 1 ticket is left but it is sold to 2 people, because a second purchase started between steps 1 and 2 of the first transaction.
It's essential that you understand this before you start programming an e-commerce project.
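Here is roughly what those four steps look like in code, sketched with Postgres and psycopg2 (the schema and charge_card() are placeholders); SELECT ... FOR UPDATE takes the row lock described above:

```python
# Sketch of the four-step purchase flow above with Postgres row locking via
# psycopg2. Table/column names and charge_card() are placeholders.
import psycopg2

def buy_ticket(conn, event_id: int, card) -> bool:
    with conn:                       # commit on success, rollback on exception
        with conn.cursor() as cur:
            # Step 1: lock the event row; concurrent buyers queue up here.
            cur.execute(
                "SELECT tickets_left FROM events WHERE id = %s FOR UPDATE",
                (event_id,),
            )
            (tickets_left,) = cur.fetchone()
            if tickets_left < 1:
                return False         # sold out; lock released on block exit
            # Step 2: decrement while still holding the lock.
            cur.execute(
                "UPDATE events SET tickets_left = tickets_left - 1 WHERE id = %s",
                (event_id,),
            )
            # Step 3: charge the card. If this raises, psycopg2 rolls the
            # decrement back automatically, as described above.
            charge_card(card)        # placeholder for your payment gateway
            # Step 4: record the purchase in the same transaction.
            cur.execute(
                "INSERT INTO purchases (event_id, card_last4) VALUES (%s, %s)",
                (event_id, card.last4),
            )
    return True
```

In production you'd usually hold the ticket, commit, and call the payment gateway outside the lock (releasing the hold on failure), so a slow card network doesn't serialize all buyers; the sketch keeps the answer's step order for clarity.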
Check out this question regarding releasing inventory.
I don't think you'll run into the limits of a relational database system. You do need one that handles transactions, however. As I recommended to the poster in the referenced question, you should be able to handle both reserved tickets that count against inventory and tickets on orders where the purchaser bails before the transaction completes.
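One common shape for that reserved-vs-abandoned handling (a sketch under an assumed schema, not from the linked question): give each hold an expiry and have a periodic job return expired holds to inventory.

```python
# Reservation-with-expiry sketch: holds count against inventory immediately,
# and a periodic job returns abandoned holds to the pool. The schema
# (reservations.status/expires_at) is illustrative, not from the question.
import psycopg2

def release_expired_holds(conn) -> int:
    """Return expired holds to inventory; run this from a scheduler."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, event_id, qty FROM reservations "
            "WHERE status = 'HELD' AND expires_at < now() FOR UPDATE"
        )
        expired = cur.fetchall()
        for res_id, event_id, qty in expired:
            cur.execute(
                "UPDATE events SET tickets_left = tickets_left + %s "
                "WHERE id = %s",
                (qty, event_id),
            )
            cur.execute(
                "UPDATE reservations SET status = 'EXPIRED' WHERE id = %s",
                (res_id,),
            )
        return len(expired)
```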
Your question seems broader than database design.
First of all, a relational database will scale perfectly well for this. You may want a web-services layer that handles the actual ticket brokering for end users; there you can cache things independently of the actual database design. However, you need to think through the appropriate steps for inserts and updates, as well as selects, in order to optimize performance.
The first step would be to construct a well-normalized relational model to hold your information.
Second, build a web-service interface to interact with the data model.
Then put a user interface on top and stress-test it with many simultaneous transactions.
My bet is that you'll need to rework your web-services layer iteratively until you're happy, but your (well-normalized) database will not be causing you any bottleneck issues.