zipline: How to find the sid of a symbol such as "AAPL" - assets

I want to use the data in zipline to do some research (not in a backtest).
So how can I find the sid for a symbol in a plain "helloworld.py" script?
Looking at zipline.assets.Asset, it seems that the Asset class is implemented in C.
Thanks
Arthur
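For reference, outside of a backtest the asset metadata is reachable through an ingested data bundle's asset finder. A minimal sketch, assuming a bundle named 'quandl' has already been ingested (use whatever bundle name you passed to zipline ingest):
from zipline.data import bundles

# Load a previously ingested bundle; 'quandl' is an assumption --
# use the name you used with `zipline ingest -b <name>`.
bundle_data = bundles.load('quandl')

# lookup_symbol returns an Equity object whose .sid is the integer id.
asset = bundle_data.asset_finder.lookup_symbol('AAPL', as_of_date=None)
print(asset.sid, asset)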

Related

How to keep translations separated when the same word is used in English but a different one is needed in other languages?

Imagine I have a report, a letter actually, which I need to translate into several languages. I have created a greeting field in the form which is filled programmatically by an onchange event method.
if self.partner_id.gender == 'female':
    self.letter_greeting = _('Dear %s %s,') % (  # the translation should be "Estimada"
        self.repr_recipient_id.title.shortcut, surname
    )
elif self.partner_id.gender == 'male':
    self.letter_greeting = _('Dear %s %s,') % (  # translation: "Estimado"
        self.repr_recipient_id.title.shortcut, surname
    )
else:
    self.letter_greeting = _('Dear %s %s,') % (  # translation: "Estimado/a"
        self.partner_id.title.shortcut, surname
    )
In this case the word "Dear" should be translated differently in Spanish depending on which branch is used, because the ending changes with the gender. Exporting the po file, I found that all the occurrences are grouped into a single entry. That makes sense because in almost all cases the translations will be the same, but not in this one:
#. module: custom_module
#: code:addons/custom_module/models/sale_order.py:334
#: code:addons/custom_module/models/sale_order.py:338
#: code:addons/custom_module/models/sale_order.py:342
#, python-format
msgid "Dear %s %s,"
msgstr "Dear %s %s,"
Solutions I can apply directly
Manually put the terms in different entries, so they don't share the same translation, every time I need to update the po file. This can be cumbersome if you have many different words with this problem. Also, if I do it and open the file with poedit, this error appears: duplicate message definition
Put all the possible combinations with slashes; this is done in some other parts of Odoo. For both genders it would be:
#. module: stock
#: model:res.company,msg:stock.res_company
msgid "Dear"
msgstr "Estimado/a"
This is just an example. I can think of many words that are the same in English but need different spellings or meanings in other languages depending on the context.
Possible best solutions
I don't know if Odoo knows anything about the context of a word in order to tell whether it was already translated or not. Adding a context manually could solve the problem, at least for words with different meanings.
The nicest solution would be a parameter for the translation method to make sure the word is exported as an isolated entry for that specific translation.
Do you think I am giving this too much importance, haha? Do you know of any better solution? Why doesn't poedit take this problem into account at all?
I propose an extension of the models res.partner.title and res.partner.
res.partner.title should get a translatable field for storing salutation prefixes like 'Dear' or 'Sehr geehrter' (German). Maybe it's worth adding something about genders too, but I won't go into detail about that here.
You probably want to show the configuring user an example like "Dear Mr. Name" or something like that. A computed field should work.
On res.partner you should implement either a computed field or just a method that returns the full salutation for a partner record.
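A rough sketch of what that could look like; the field names salutation_prefix and salutation are made up for illustration, and gender handling is left out as mentioned above:
from odoo import api, fields, models, _


class ResPartnerTitle(models.Model):
    _inherit = 'res.partner.title'

    # Translatable prefix, e.g. 'Dear' / 'Estimado' / 'Sehr geehrter'
    salutation_prefix = fields.Char(translate=True)


class ResPartner(models.Model):
    _inherit = 'res.partner'

    salutation = fields.Char(compute='_compute_salutation')

    @api.depends('title', 'title.salutation_prefix', 'title.shortcut', 'name')
    def _compute_salutation(self):
        for partner in self:
            prefix = partner.title.salutation_prefix or _('Dear')
            partner.salutation = '%s %s %s,' % (
                prefix, partner.title.shortcut or '', partner.name or ''
            )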
To some degree this is a linguistics problem. I believe the best solution would be to use a different "Source Language", one made up of keys, and then have English as another Translation. The word "Dear" in English does not have a gender context (and typically, much of English doesn't), while the word "Estimado" in Spanish does. The translation from that Spanish word to English is more appropriately "Masculine Dear." Therefore, using keys as your source language, you would have this:
SourceText (EnglishDescription) -> Translation (English) -> Translation (Spanish)
DearMasculine -> Dear -> Estimado
DearFeminine -> Dear -> Estimada
DearNeutral -> Dear -> Estimado/a

How to fix 'relation "us_lex" does not exist' using standardize_address?

I'm trying to parse a column of ~175,000 street names. Some of them are just one word (Jamaicaway), and some are multiple words (St. Edwards Pl). I want just the street body (Jamaicaway and St. Edwards, respectively).
I found the address_standardizer package and installed it, but when I run the example below I get the error relation "us_lex" does not exist.
SELECT house_num, name, suftype, city, country, state, unit
FROM standardize_address('us_lex', 'us_gaz', 'us_rules',
                         'One Devonshire Place, PH 301, Boston, MA 02109');
I'd expect to get back just "Devonshire," but I'm getting the error instead. There doesn't seem to be much about this on the package page. Any insight?
You will also have to run:
CREATE EXTENSION address_standardizer_data_us;
which creates the us_lex and other data tables.
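Once both extensions are installed, something like the following Python sketch would pull just the street body out of each address (psycopg2 is used here; the streets table, full_address column and connection string are all made up for illustration):
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # connection details are an assumption

with conn, conn.cursor() as cur:
    # standardize_address returns a composite stdaddr value; its .name
    # field holds the street body (e.g. 'Devonshire').
    cur.execute("""
        SELECT a.full_address,
               (standardize_address('us_lex', 'us_gaz', 'us_rules',
                                    a.full_address)).name AS street_body
        FROM streets a
        LIMIT 10;
    """)
    for full_address, street_body in cur.fetchall():
        print(full_address, '->', street_body)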

DAX / Xetra on alphavantage

I am very very happy with Alphavantage.
BUT I can't find German stocks (Xetra).
I have tried:
https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=xtr:lin&apikey=MYKEY
(But this works https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=NYSE:DIN&apikey=MYKEY)
https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=Lin.be&apikey=MYKEY
(But this works: https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=Novo-b.CO&apikey=MYKEY)
So my question is: has anyone had any luck getting German stocks on Alphavantage (or another free service)? Real-time is not crucial, but obviously a plus.
I use the "Search Endpoint" function to find German stocks on alphavantage.
Let's say you are looking for "BASF"; you could query:
https://www.alphavantage.co/query?function=SYMBOL_SEARCH&keywords=BASF&apikey=[your API key]&datatype=csv
You get a list with possible matches:
symbol,name,type,region,marketOpen,marketClose,timezone,currency,matchScore
BASFY,BASF SE,Equity,United States,09:30,16:00,UTC-05,USD,0.8889
BFFAF,BASF SE,Equity,United States,09:30,16:00,UTC-05,USD,0.8889
BASFX,BMO Short Tax-Free Fund Class A,Mutual Fund,United States,09:30,16:00,UTC-05,USD,0.8889
BAS.DEX,BASF SE,Equity,XETRA,08:00,20:00,UTC+02,EUR,0.7273
BAS.FRK,BASF SE,Equity,Frankfurt,08:00,20:00,UTC+02,EUR,0.7273
BASA.DEX,BASF SE,Equity,XETRA,08:00,20:00,UTC+02,EUR,0.7273
BAS.BER,BASF SE NA O.N.,Equity,Berlin,08:00,20:00,UTC+02,EUR,0.7273
BASF.NSE,BASF India Limited,Equity,India/NSE,09:15,15:30,UTC+5.5,INR,0.6000
See documentation: https://www.alphavantage.co/documentation/
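For reference, roughly the same two-step lookup in Python with requests (a sketch; the JSON field names such as "bestMatches", "1. symbol" and "Time Series (Daily)" are what the API returned at the time of writing and may change):
import requests

API_KEY = 'MYKEY'  # replace with your own key
BASE = 'https://www.alphavantage.co/query'

# Step 1: search for the company name to discover the exchange-specific symbol.
search = requests.get(BASE, params={
    'function': 'SYMBOL_SEARCH', 'keywords': 'BASF', 'apikey': API_KEY,
}).json()
for match in search.get('bestMatches', []):
    print(match['1. symbol'], match['4. region'])

# Step 2: request the daily series for the XETRA listing (e.g. BAS.DEX).
daily = requests.get(BASE, params={
    'function': 'TIME_SERIES_DAILY', 'symbol': 'BAS.DEX', 'apikey': API_KEY,
}).json()
latest_day = sorted(daily['Time Series (Daily)'])[-1]
print(latest_day, daily['Time Series (Daily)'][latest_day])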
It seems to work with the Yahoo symbols on alphavantage, at least for a few stocks (I did not check all of them). BASF, for example, works with:
https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=BASF.TI&apikey=MYKEY
The alphavantage symbols for German securities consist of the Xetra symbol + .DE. For example EUNL.DE (for iShares MSCI World Core ETF). You can find a list of all Xetra stocks here.

Am I training my wit.ai bot correctly?

I'm trying to train my Wit.ai bot to recognize someone's first name. I'm not very sure I understand how the NLP works, so I'll give you an example.
I defined a lot of expressions like "My name is XXXX" and "Everybody calls me XXXX".
In the "Understanding" table I added an entity named "contact_name" and added almost 50 keywords like "Michel", "John", "Mary"...
I set the entity's search strategy to "free-text" and "keywords".
I'm not sure if this process is correct, so I ask you:
does the context, like "My name is...", matter for the NLP? I mean, will it help the bot predict that a first name will probably come after this expression?
is it right to add around 50 values to an entity, or is that completely wrong?
what do you suggest as a training process in order to get someone's first name?
You have done it right by keeping the entity's search strategy as "free-text" and "keywords". But adding keyword examples to the entity doesn't make much sense, because a person's name is not a keyword.
So, I would recommend a training strategy which is as follows:
Create various templates of the message, like "My name is XYZ", "I am XYZ", "This is XYZ", etc. (all the possible introduction messages you can think of)
Remove all keywords and expressions for the entity you created and add these two keywords:
"a b c d e f g h i j k l m n o p q r s t u v w x y z"
"XYZ" (can give any name but maintain this name same for validating the templates)
In the 'Understanding' tab enter the messages and extract the name into the entity ("contact_name" in your case) and validate them
Similarly, validate all message templates keeping the name as "XYZ"
After you have done this for all templates your bot will be able to recognise any name in a given template of the message
The logic behind this is your entity is a free-text and keyword which means it first tries to match the keyword if not matched it tries to find the word in the same position of the templates. Keeping the name same for validations helps to train the bot with the templates and learn the position where the name will be usually found.
Hope this works. I have tried this and worked for me. I am not sure how bot trains in background. I recommend you to start a new app and do this exercise.
Comment if there is any problem.
wit.ai has a pre-trained entity extraction method called wit/contact, which
captures free text that's either the name or a clear reference to a person, like "Paul", "Paul Smith", "my husband", "the dentist".
It works well even without any training data.
To read more about the method, refer to duckling.
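A minimal sketch of querying the /message endpoint from Python and reading back whatever entities were resolved (the token placeholder is yours to fill in, and the exact entity key for the built-in contact entity may differ depending on how it was added to your app):
import requests

WIT_TOKEN = 'YOUR_SERVER_ACCESS_TOKEN'  # your app's server access token

resp = requests.get(
    'https://api.wit.ai/message',
    params={'v': '20200513', 'q': 'My name is Paul'},
    headers={'Authorization': 'Bearer ' + WIT_TOKEN},
).json()

# Print every resolved entity; the contact entity usually shows up under a
# key like 'wit$contact:contact', but check your own app's response.
for key, matches in resp.get('entities', {}).items():
    for match in matches:
        print(key, '->', match.get('value'))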

VK API - return city names in latin

I am using the VK API to get a list of cities in a specific country. Does anyone know how to show Russian city names (which are in Cyrillic) in Latin script?
Example of JSON response:
http://api.vk.com/method/places.getCities?lang=en&country_id=1&count=1000&need_all=1
I am trying to check whether a city exists, but if someone enters the city name in Latin script the check only works in some cases; for example, Vladivostok is Владивосток, but Moscow is Москва.
I found one solution that works for me: first get all the city ids with the places.getCities (database.getCities) method, and then use database.getCitiesById, providing the saved city ids, like the following:
database.getCitiesById?lang=en&city_ids=1,2,123
In the request to this API method you can specify the needed language with the "lang" parameter (e.g. lang=en) and pass up to 1,000 comma-separated city ids (e.g. city_ids=1,2,123,...).
database.getCitiesById official documentation
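Roughly the same two-step flow in Python (a sketch; it assumes a VK access token and API version 5.131, which current API calls require, and it only handles the first 1,000 cities):
import requests

VK_TOKEN = 'YOUR_ACCESS_TOKEN'  # a service or user access token (assumption)
API = 'https://api.vk.com/method/'
COMMON = {'access_token': VK_TOKEN, 'v': '5.131', 'lang': 'en'}

# Step 1: collect city ids for the country (country_id=1 is Russia).
cities = requests.get(API + 'database.getCities', params={
    **COMMON, 'country_id': 1, 'need_all': 1, 'count': 1000,
}).json()['response']['items']

# Step 2: re-request those ids with lang=en to get the Latin titles.
# (The exact response shape may vary by API version.)
city_ids = ','.join(str(c['id']) for c in cities)
latin = requests.get(API + 'database.getCitiesById', params={
    **COMMON, 'city_ids': city_ids,
}).json()['response']
for city in latin[:5]:
    print(city['id'], city['title'])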
As an alternative, you can get the static list of Russian cities here. This JSON array contains the slug name of each city in English. It's a free list, so anyone can edit and add correct information for any localities.