using pdblp with currency option - bloomberg

I want to use pdblp to return the historical market cap for a stock, which I can do with the code below:
df = con.bdh('BHP AU Equity', 'CUR_MKT_CAP','20180129', '20180129')
Does anyone know how to include a currency option so that the returned data is in USD? I have tried the following, based on notes from the GitHub site:
df = con.bdh('BHP AU Equity', 'CUR_MKT_CAP','20180129', '20180129',elms=[('Currency','USD')])
But this is not working.
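The answer to the related question below suggests a likely fix: in the underlying HistoricalDataRequest, currency is a request element (an "option"), and its name is lowercase. A hedged sketch, assuming pdblp passes elms entries straight through as request elements:
df = con.bdh('BHP AU Equity', 'CUR_MKT_CAP', '20180129', '20180129',
             elms=[('currency', 'USD')])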

Related

Pricing Currency Override in bloomberg api wrapper blp

I'm using the blp package as a wrapper around Bloomberg's API.
In the Excel Bloomberg API, I'm pulling a field called FUND_TOTAL_ASSETS. That value can be in any of several currencies, but adding the "FX=USD" parameter to my BDH() query normalizes all the values to USD. E.g. the 'FUND_TOTAL_ASSETS' field for ticker '1671 JP Equity' becomes 188.3905 USD instead of 25915:
=BDH(E$4,E$6,$B1,$B2,"Dir=V","CDR=5D","Days=A","FX=USD","Dts=H","cols=1;rows=22")
I want to pull the same data, with the same conversion-to-USD behaviour, using the blp wrapper. I thought I could do this via an override parameter.
From the Bloomberg API guide's notes on overrides, I thought something like the following would work:
xf = bquery.bdh(securities=search, fields=['FUND_TOTAL_ASSETS',],
                start_date=dt.date(2022, 10, 1).strftime("%Y%m%d"),
                end_date=dt.datetime.now().date().strftime("%Y%m%d"),
                overrides=[('currency', 'USD')])
It returns the data, but the value remains in the native currency.
I tried the same parameter as the Excel API (('FX', 'USD')), but that fails with a Bloomberg INVALID_OVERRIDE_FIELD error.
The underlying Bloomberg DAPI differentiates between Overrides and Options (but even then there is some confusing overlap).
Typically an Override is applied to a Bloomberg Field. E.g. the field FUND_CRNCY_ADJ_TOTAL_ASSETS for 1671 JP Equity has an override of FUND_TOTAL_ASSETS_CRNCY, where you can set the currency. You can see this by typing 1671 JP Equity FLDS in the Terminal and searching the available fields. Overrides are specific to the field.
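For example, a hedged sketch of the override route with the blp wrapper, using the field and override named above (it assumes overrides are passed as a list of (override, value) tuples, as in the question's attempt):
xfOvr = bquery.bdh(securities='1671 JP Equity',
                   fields=['FUND_CRNCY_ADJ_TOTAL_ASSETS'],
                   start_date='20221001', end_date='20221007',
                   overrides=[('FUND_TOTAL_ASSETS_CRNCY', 'USD')])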
In addition, the HistoricalDataRequest schema has various options which can be specified. These are elements of the request, in the same way as startDate and endDate, and they are the options in the reference the OP links to. Options apply to all the fields and usually govern how the data is returned (e.g. date periodicity).
With the blp package, options are supplied as a dictionary of key:value pairs.
This snippet demonstrates how this can be done to specify the currency of the returned data:
from blp import blp
import datetime as dt

bquery = blp.BlpQuery().start()

dtEnd = dt.datetime.now().date()
dtStart = dtEnd - dt.timedelta(days=7)

# without options: values come back in the fund's native currency (JPY here)
xf = bquery.bdh(securities='1671 JP Equity', fields=['FUND_TOTAL_ASSETS',],
                start_date=dtStart.strftime("%Y%m%d"),
                end_date=dtEnd.strftime("%Y%m%d"))
print(xf)

# same request with the 'currency' option: values come back in USD
xfUSD = bquery.bdh(securities='1671 JP Equity', fields=['FUND_TOTAL_ASSETS',],
                   start_date=dtStart.strftime("%Y%m%d"),
                   end_date=dtEnd.strftime("%Y%m%d"),
                   options={'currency': 'USD'})
print(xfUSD)
The output should reflect the different currency used.

Most Recent Coins added to CoinGecko

I am trying to get the most recently added coins on CoinGecko. Any ideas which API to use, or how to achieve this?
Ideally I want to get this in near real time.
Thanks
I don't think CoinGecko provides a direct API endpoint for recently added coins. For development purposes, you can try web scraping:
https://www.coingecko.com/en/coins/recently_added
You can read the names of the latest coins directly from https://www.coingecko.com/en/coins/recently_added and then use the CoinGecko API to look up info by name.
In Python:
import requests

# get all coins listed on CoinGecko
coins = requests.get('https://api.coingecko.com/api/v3/coins/list').json()

# extract the names of the latest coins from the "recently added" page
r = requests.get('https://www.coingecko.com/it/monete/recently_added')
prefix = '<td class="py-0 coin-name" data-sort='
names = []
for line in r.text.splitlines():
    line = line.strip()
    if line.startswith(prefix):
        name = line[len(prefix) + 1:-2]  # strip the quotes around the attribute value
        print(name)
        names.append(name)

# then look each coin up in the list retrieved above
for coin in coins:
    if coin['name'] in names:
        r = requests.get('https://api.coingecko.com/api/v3/coins/' + coin['id'])
        print(r.json())
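If near real time matters, a hedged alternative that avoids scraping is to poll the documented /coins/list endpoint and diff successive snapshots; any id in the new snapshot that wasn't in the old one is a newly listed coin. A minimal sketch:
import time
import requests

URL = 'https://api.coingecko.com/api/v3/coins/list'
seen = {c['id'] for c in requests.get(URL).json()}
while True:
    time.sleep(300)  # poll gently; the public API is rate limited
    current = requests.get(URL).json()
    for c in current:
        if c['id'] not in seen:
            print('newly listed:', c['id'], c['name'])
    seen.update(c['id'] for c in current)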

Pandas, importing JSON-like file using read_csv

I would like to import data from a .txt file into a DataFrame. I cannot import it with a plain pd.read_csv; whatever sep I try throws errors. The data I want to import, Cell_Phones_&_Accessories.txt.gz, is in this format:
product/productId: B000JVER7W
product/title: Mobile Action MA730 Handset Manager - Bluetooth Data Suite
product/price: unknown
review/userId: A1RXYH9ROBAKEZ
review/profileName: A. Igoe
review/helpfulness: 0/0
review/score: 1.0
review/time: 1233360000
review/summary: Don't buy!
review/text: First of all, the company took my money and sent me an email telling me the product was shipped. A week and a half later I received another email telling me that they are sorry, but they don't actually have any of these items, and if I received an email telling me it has shipped, it was a mistake.When I finally got my money back, I went through another company to buy the product and it won't work with my phone, even though it depicts that it will. I have sent numerous emails to the company - I can't actually find a phone number on their website - and I still have not gotten any kind of response. What kind of customer service is that? No one will help me with this problem. My advice - don't waste your money!
product/productId: B000JVER7W
product/title: Mobile Action MA730 Handset Manager - Bluetooth Data Suite
product/price: unknown
....
You can use a separator that never occurs in the data (here ¥), then split on the first : and pivot:
import pandas as pd

# ¥ never occurs in the file, so each physical line becomes one row
df = pd.read_csv('Cell_Phones_&_Accessories.txt', sep='¥', names=['data'], engine='python')
df1 = df.pop('data').str.split(':', n=1, expand=True)  # split on the first colon only
df1.columns = ['a', 'b']
df1 = df1.assign(c=(df1['a'] == 'product/productId').cumsum())  # number the records
df1 = df1.pivot(index='c', columns='a', values='b')  # one row per record, one column per key
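The pivoted df1 then has one row per review (indexed by c) and one column per key, e.g. product/productId, review/score and review/text.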
A pure-Python solution with defaultdict and the DataFrame constructor, for better performance:
from collections import defaultdict
import pandas as pd

data = defaultdict(list)
with open("Cell_Phones_&_Accessories.txt") as f:
    for line in f:
        if len(line) > 1:  # skip the blank lines between records
            key, value = line.strip().split(':', 1)
            data[key].append(value.strip())

df = pd.DataFrame(data)
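Side note: the file in the question is gzipped (Cell_Phones_&_Accessories.txt.gz). Assuming it hasn't been decompressed first, the same loop works unchanged over gzip.open in text mode:
import gzip
from collections import defaultdict
import pandas as pd

data = defaultdict(list)
with gzip.open("Cell_Phones_&_Accessories.txt.gz", "rt") as f:  # "rt" yields str lines
    for line in f:
        if len(line) > 1:
            key, value = line.strip().split(':', 1)
            data[key].append(value.strip())

df = pd.DataFrame(data)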

Coinbase work with ethereum instead of bitcoin

I am trying to create an ether buy and sell bot on Coinbase. They have a truly wonderful description on their developer page. There is one thing I am missing.
Somehow all functions default to Bitcoin rather than Ether. I assume there is a setting in the code to change that, but I can't find it. All examples on their developer page use Bitcoin. For example:
buy_price = client.get_buy_price(currency = 'EUR')
This returns: amount, base and currency. So I noticed I can change the currency. Now I tried to change the base with
buy_price = client.get_buy_price(currency = 'EUR', base = 'ETH')
It still returns BTC (bitcoin) as base.
Hope someone can help me out here.
Try this:
buy_price = client.get_buy_price(currency_pair = 'ETH-USD')
From https://developers.coinbase.com/api/v2#get-exchange-rates
EDIT: the Python API seems not to work. But the raw GET request works, so here's a replacement function for you:
import urllib.request
import json

def myGetBuyPrice(crypto, fiat):
    # e.g. https://api.coinbase.com/v2/prices/ETH-USD/buy
    url = "https://api.coinbase.com/v2/prices/" + crypto + "-" + fiat + "/buy"
    ret = urllib.request.urlopen(url).read().decode("utf-8")
    return json.loads(ret)["data"]

print(myGetBuyPrice("ETH", "USD"))
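The returned dict has the same shape as the client's response, i.e. amount, base and currency, so the price itself is myGetBuyPrice("ETH", "USD")["amount"].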

Is there a way to speed up this webscraping iteration? Pandas

So I'm collecting data on a list of stocks and putting all that info into a dataframe. The list has about 700 stocks.
import pandas as pd
stock = ['adma', 'aapl', 'fb']  # the real list has about 700 stocks, extracted from a pickled DataFrame

# The site I'm visiting is below, with the name of the stock added to the end of the link:
# http://finviz.com/quote.ashx?t=adma
# http://finviz.com/quote.ashx?t=aapl
I'm extracting just one table from that page, hence the [-2] in the code below:
df2 = pd.DataFrame()
for i in stock:
    df = pd.read_html('http://finviz.com/quote.ashx?t={}'.format(i), header=0)[-2].set_index('SEC Form 4')
    df['Stock'] = i.upper()  # column with the stock name, so I can differentiate between stocks
    df2 = df2.append(df)
Each iteration seems to take a few seconds, and I have around 700 to go through at the moment. It's not terribly slow, but I was curious whether there is a more efficient method. Thanks.
Your current code is blocking: you don't start retrieving the next URL until you are done with the current one. Instead, you can switch to, for example, Scrapy, which is based on Twisted and processes multiple pages asynchronously at the same time.
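If pulling in Scrapy feels too heavy, a hedged middle ground is to keep pandas for the parsing but fetch the pages concurrently with a thread pool, since the network wait dominates each iteration. A sketch, assuming the same finviz layout as the question:
from concurrent.futures import ThreadPoolExecutor
import pandas as pd

stock = ['adma', 'aapl', 'fb']  # ~700 tickers in practice

def fetch(ticker):
    # same parsing as the question, isolated per ticker
    df = pd.read_html('http://finviz.com/quote.ashx?t={}'.format(ticker),
                      header=0)[-2].set_index('SEC Form 4')
    df['Stock'] = ticker.upper()
    return df

# threads overlap the HTTP waits; 20 workers is a polite starting point
with ThreadPoolExecutor(max_workers=20) as pool:
    frames = list(pool.map(fetch, stock))

df2 = pd.concat(frames)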