Bitcoin arbitrage collection formula - bitcoin

https://cryptorank.io/price/bitcoin/arbitrage
I am working on displaying arbitrage data for various currencies, like the Bitcoin arbitrage page at the URL shared above.
The page shows a number of records with values such as +9.53% and +7.7% against the other exchanges. I have tried hard to find the formula behind this calculation, but I was unable to do so.
I am asking in case anyone has worked on this type of problem before and can help me get an idea of how it is done.
Looking forward to your suggestions!

The percentages shown on that page are calculated from the exchange rates of the row and the column (i.e. of two different exchanges):
1 - (exchangeRateOfRow / exchangeRateOfColumn)
E.g. the 11.5% in the first cell of your screenshot:
1 - (19,536 / 22,069) = 0.115 = 11.5%
I don't want to make any statement about whether these are real arbitrage opportunities.
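For illustration only (none of this comes from that site), the same calculation could be written as a query over a hypothetical quotes table holding one last-traded BTC/USD price per exchange; all table and column names here are made up:

-- Hypothetical table: quotes(exchange, price), one row per exchange, price as a numeric type
SELECT
  a.exchange AS row_exchange,
  b.exchange AS column_exchange,
  1 - (a.price / b.price) AS spread  -- e.g. 1 - (19536 / 22069) = 0.115 = 11.5%
FROM quotes AS a
JOIN quotes AS b
  ON a.exchange <> b.exchange;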

How to bypass default parameter to include a range or better SQL?

EDITED (AGAIN): added tables and two screenshots (one of the Google Sheets chart and another showing multiple issues in DS) to help demonstrate what I am seeing.
Short Version: I have created a parameter to help me score trending topics based on the date range filter. However, I want to be able to show a range of dates' worth of data, not just a specific date's worth of data. In theory, I could make the parameter a checklist with a huge range, but that doesn't seem efficient or sustainable down the road.
Disclaimer: I am about a week into SQL and Data Studio.
Long Version: We are tracking trends over time from a specific customer data set. I'd like it so that when a user adjusts the time range, each topic's "score" depends on the end date. For instance, every time the topic "Recession" is brought up, it is given a score, weighted by when it was said. I am using 365 as the highest possible score, so anything over a year old is null. If "Recession" is referenced twice, once a week ago and once today, its average score is 361.5; if "Talent Management" is referenced twice today, its score is 365; and so on across a growing list of 50+ topics tracked across 50+ specific communities.
Here is an example:
topics        groups    entry_date
recession     A         2022-11-24
talent mgt    A         2022-11-24
recession     B         2022-11-22
economy       A         2022-11-22
recession     C         2022-11-15
talent mgt    B         2022-11-08
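To make the weighting concrete, here is a sketch in BigQuery SQL run against rows like the sample above (the table name sample_topics is hypothetical, and the end date is pinned to 2022-11-24 purely for illustration). With that end date, "recession" scores 365 in group A, 363 in group B, and 356 in group C:

-- Hypothetical table: sample_topics(topics STRING, groups STRING, entry_date DATE)
-- Score = 365 minus the entry's age in days; entries older than a year score NULL.
SELECT
  topics,
  AVG(
    CASE
      WHEN DATE_DIFF(DATE '2022-11-24', entry_date, DAY) <= 365
        THEN 365 - DATE_DIFF(DATE '2022-11-24', entry_date, DAY)
      ELSE NULL
    END
  ) AS avg_score
FROM sample_topics
GROUP BY topics;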
This score would then affect the bubble size on a chart where the Y-axis is the count of unique groups referencing the topics, and an x-axis based on the range of average scores.
The goal is to be able to see which topics are the most common across groups, which ones are emerging trends, and which ones are dated trends by having a range slider. That way users (colleagues in other departments) can play with the date range and "see" the bubbles move in location and size.
Example of a static chart in Google Sheets
I could then also use the same data and fields to measure the percentage of topics being discussed across groups based on the weighted averages against a time range.
In Google Sheets I can do this with an XLOOKUP to a tab called 'scales' that has a column of 0-365 and, next to it, a column of 365-0, plus a cell on a sheet where you can put any date as the point in time; changing that cell updates all the scores, tables, charts, etc. I used: =XLOOKUP((point_in_time - entry_date), 'scales'!A:A, 'scales'!B:B, "0")
In the Data Studio custom SQL I used:
SELECT
  *
FROM
  `qRaw_data`
WHERE
  DATE(_entry_dates_) BETWEEN
    PARSE_DATE('%Y%m%d', @DS_START_DATE) AND
    PARSE_DATE('%Y%m%d', @DS_END_DATE)
  AND @pit_date_diff = DATE_DIFF(
    PARSE_DATE('%Y%m%d', @DS_END_DATE),
    _entry_dates_,
    DAY
  )
Then I created a time_score field of:
avg((Pit_Date_Diff-365)*(-1))
I have been Googling and YouTubing like crazy. I think I either have to find a way to override the @pit_date_diff default value, or I need to use a CASE WHEN in the custom query (WHEN the date_diff is 1 THEN 365, and so on), but when I try that I get all sorts of errors.
I would like the chart below to include all topics, averaged over all entry dates, not just those that match the inputted parameter field.
Currently, I can only show specific entry dates due to the parameter.
I appreciate any and all help. I am a week into using Data Studio and am going cross-eyed Googling and YouTubing things. There is likely a better logical path to accomplish all this. Hoping for a holiday miracle.
Thanks in advance.
It turns out this was much easier than I realized... I used AS to create a column in the query, and then created a field that reproduces the same metric I had in Google Sheets:
SELECT
  *,
  DATE_DIFF(PARSE_DATE('%Y%m%d', @DS_END_DATE), _entry_dates_, DAY) AS q_time_diff
FROM
  `qRaw_data`
Then the score field is: (avg(q_time_diff)-365)*(-1)
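As a variation, here is a sketch (using the same @DS_END_DATE parameter, column, and table names as above, so treat it as untested) that also nulls out entries more than a year old, matching the 365-day rule described in the question:

SELECT
  *,
  CASE
    WHEN DATE_DIFF(PARSE_DATE('%Y%m%d', @DS_END_DATE), _entry_dates_, DAY) <= 365
      THEN DATE_DIFF(PARSE_DATE('%Y%m%d', @DS_END_DATE), _entry_dates_, DAY)
    ELSE NULL  -- entries older than a year become NULL and drop out of avg(q_time_diff)
  END AS q_time_diff
FROM
  `qRaw_data`

The same (avg(q_time_diff)-365)*(-1) score field works unchanged on top of this.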
In case that helps any others in the future... ¯\_(ツ)_/¯
Happy Holidays!

Commodity Term Structure Data from Bloomberg

I am looking for a way to download daily WTI crude oil (CL) term structure data from Bloomberg in the Excel Add-in.
My goal is to create a table that looks like this: for each futures contract, I want the last price and the days to maturity.
Date     | CL1                          | CL2                          | ...
1.1.12   | PX_LAST, FUT_ACT_DAYS_EXP    | PX_LAST, FUT_ACT_DAYS_EXP    | ...
I have tried the FUT_ACT_DAYS_EXP field, but it is not available for historical price series. Is there perhaps a different way to retrieve the term structure data with days to maturity from Bloomberg?
Many thanks!
The generic contract CL1 Comdty has a historic field of FUT_CUR_GEN_TICKER. So you can pull back a timeseries of PX_LAST and FUT_CUR_GEN_TICKER.
Then you can feed these underlying contract tickers into a BDP call for LAST_TRADEABLE_DT and subtract the PX_LAST date. You can hide the intermediate columns if they are not needed.
And the final result, with the intermediate column D hidden:
NB: I'm using array functions here (note the # symbols in the formulae) rather than hard-coding ranges; it makes things more flexible if you want to change the history range. The multiple BDP calls are unnecessary, but the Bloomberg add-in may be caching them in any case. If performance is an issue, you can use the UNIQUE() function to get a list of the underlying contract names into a lookup table.
It's been a few years since I looked at this, but you are using generic tickers, whereas FUT_ACT_DAYS_EXP most likely expects an actual contract, e.g. CLU1. There is a field you can use to convert generic to actual as a time series, i.e. with BDH, but you would have to check that on FLDS. Once you have that, you can use BDH to pull in the price and days to expiry. Bear in mind that with one BDH per date this would be a very inefficient approach with regard to data limits.
Alternatively, ask HELP HELP and they should give you a solid answer. Unfortunately I don't have access to the terminal anymore, but I used to be very involved in building such analytics.

Grand totals row not summing in Google Data Studio

Well, I'm an absolute newbie in Google Data Studio, but for some reason my grand totals row is not working.
I'm learning to use this tool, and I made an easy table with just countries and sessions.
Piece of cake. Now I just want to add a total row that sums all sessions. That's all. I activated the Show Summary Row option, but it shows nothing.
Things I've done that have not worked:
Update and refresh
Changed time period and tried different dates just in case.
Delete and create again full table.
Checked connection. I get data and the data is right, I just cannot sum it.
Changed the size and format of the table, just in case it was a problem with margins or font color.
And I know it can be done, because I've seen it in different sources. I've read this question here:
Grand Total is wrong in Google Data Studio
But it did not help. In that question, a user posted an image in the comments:
As you can see, he managed to get what I'm trying to do.
So I must be doing something wrong, and I do not know why.
UPDATE 2: If I apply a filter, I get no totals. You can see my config on the right side of the image.
Can anybody give me a clue of how to make a grand totals row in Google Data Studio?
Thanks
Sounds like a bug. It should just be a case of selecting that tick box. Strangely, I looked at an existing table I have with totals: when I unticked the box and then ticked it again, the totals didn't reappear, and they disappeared off another table on the page (like in your example). They did reappear eventually after some refreshing of the data and the page, but it seems like there's something wrong with them.
I don't think this is a bug; I think it's part of the design.
I actually just discovered the reason this is happening, at least for me: the summary row doesn't actually sum the values in the table. The grand total of a table is a sum over whatever metric is being used, not over the rows actually shown in the chart. So if you have a dimension (like age/gender) where Google applies data thresholding internally, but you are using a metric such as Users, you will see a grand total taken from the metric value without the dimension's thresholding applied.
Proof below
You can see that the rows in column 2 do not add up to the 953.6 shown as the grand total; they add up to 453.6. And if I look at a non-thresholded dimension (country),
you can see where the 953.6 comes from: the data source supplied to the table uses 80% of all users, and 1192 * 0.8 gives 953.6, which is what the grand total is displaying. Conclusion: when using a thresholded dimension in a table with a metric, there will be a discrepancy, because the grand total is not computed from the table's values but from the metric's source data, which, for some odd reason, does not have the table's dimension (and its thresholding) applied.

Exporting only rows from SQL in phpMyAdmin where a certain column has a Boolean of 0

"meta/background about the use of code and person using it"
1.site built by professional that left company,
2.I am inexperienced but trying/ want to learn,
3.Customer support site for service reps,
................................................
What I'm trying to do, exactly, per Stack Overflow's parameters:
We have a drop-down box listing issues that the customer had, stored in a column labeled "issue_type". I can export the entire table via CSV, load it into Excel, and give it to my boss for an overall review of what the issues were. However, the database also has a "hide" column. Its function is that when a row is updated, the old record is kept, but the same "job or call" has only one viewable report on the site (the most recently updated one). "hide" is a boolean. In conclusion, I want to export only the rows where the "hide" column is 0, AND to export only the columns "customer" and "issue_type". I seem to be able to do only one or the other; I have researched for a minimum of 4 hours to find the answer myself and cannot find a syntax to do both at the same time in phpMyAdmin.
I don't want an enormous data set that is mostly useless except for issue type and customer, but will I have to manually delete all the rows with hide = 1?
Thanks, anyone. This is my first attempt at a question; sorry if it is not correct for Stack Overflow.
SELECT Customer, Issue_type FROM tickets WHERE hide = 0;
Elaborating on the above for anyone who may be looking for a similar answer: SQL supports the WHERE clause, with which, when the syntax is right, you can select your columns and filter rows whose strings, booleans, and numbers equal what you're looking for. Wildcards, which I found later for other uses, work as well.
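For instance, a sketch combining the boolean filter above with a LIKE wildcard (the 'billing%' pattern is only an illustration, not a value from the original site):

SELECT Customer, Issue_type
FROM tickets
WHERE hide = 0
  AND Issue_type LIKE 'billing%';  -- '%' matches any run of characters after "billing"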
Sorry about the self-answer, but hopefully someone finds this useful.

Hiding unnecessary fields in an Access report

At my workplace there is a "Daily Feedback" database where we enter the details of any errors made by Customer Service Officers (CSOs), such as who made the mistake, when, and what service it was for. This is so we can gather data showing the areas where CSOs are repeatedly making mistakes, so we can feed this back to them and train them in those areas if need be.
I currently have a report where a CSO's name is entered along with a date range, and it produces a report showing the number of errors made for each service in that date range.
The code used is:
=Sum(IIf([Service]="Housing",1,0))
=Sum(IIf([Service]="Environmental Health",1,0))
etc., for each service.
The problem I have is that not every CSO does EVERY service, so there are usually a few results showing as "0", and I cannot be sure whether that's because they don't do the service or because they are just very good at that service.
Being absolutely useless at SQL (or any other possible way of fixing this), I cannot figure out how to HIDE the entries that produce the zero value.
Any help here would be greatly appreciated!
Assuming you have a table with the fields CSO, Service, and FeedbackComments, you could modify the report's record source to:
SELECT [CSO], [Service], Count([FeedbackComments])
FROM [FeedbackTable]
GROUP BY [CSO], [Service];
Then services which have no records will not appear on the report.
I don't understand exactly what you want, but I want to mention that you can use the COUNT() function along with SUM(). A count > 0 will reveal whether a 0 means '0 instances' or '0 errors'.
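As a minimal sketch of that idea, assuming the same [FeedbackTable] and fields as the first answer plus a hypothetical [IsError] Yes/No field (the table would have to record non-error entries as well for the distinction to show up):

SELECT [CSO], [Service],
       Count([FeedbackComments]) AS Instances,
       Sum(IIf([IsError], 1, 0)) AS Errors
FROM [FeedbackTable]
GROUP BY [CSO], [Service];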