I am trying to download the NEER data from this target page.
Here is a good example: Most efficient way of converting RESTful output to dataframe.
Here is the code to get the OECD data.
import pandasdmx as sdmx

# Monthly 3-month interbank rates for GBR and USA from the OECD MEI_FIN dataflow
df = sdmx.Request('OECD').data(
    resource_id='MEI_FIN',
    key='IR3TIB.GBR+USA.M',
    params={'startTime': '2008-06', 'dimensionAtObservation': 'TimeDimension'},
).write()
My question is where to find and how to search for the 'resource_id', 'key', and 'params' needed to get the EUR NEER data from my target ECB page.
Thank you.
I solved it.
Find the data in https://sdw.ecb.europa.eu/. If it is not there, search the other databases.
Select and filter until you reach the data series you want.
Click the data series; the data is shown in a new window, and the URL of that window contains the key: https://sdw.ecb.europa.eu/quickview.do?SERIES_KEY=120.EXR.D.E5.EUR.EN00.A
Then use the key to download the data:
# Daily EUR nominal effective exchange rate from the ECB's EXR dataflow
df = sdmx.Request('ECB').data(
    resource_id='EXR',
    key='D.E5.EUR.EN00.A',
    params=dict(startPeriod='2019-01', endPeriod='2019-06'),
).write()
The remaining question is whether I can search for these identifiers directly from code instead of visiting the website.
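One partial answer: pandaSDMX can list and filter the available dataflows in code. A minimal sketch, assuming pandaSDMX 1.x (older versions expose .write() instead of to_pandas):

import pandasdmx as sdmx

ecb = sdmx.Request('ECB')

# Download the catalogue of dataflows the ECB exposes
flow_msg = ecb.dataflow()
flows = sdmx.to_pandas(flow_msg.dataflow)

# Filter the human-readable names for a keyword, e.g. 'exchange'
print(flows[flows.str.contains('exchange', case=False)])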
I want to diagnose autism, and one of the best autism diagnosis datasets is ABIDE (Autism Brain Imaging Data Exchange). My final goal in this part is to obtain a connectivity matrix that I can use for the rest of my research. At the moment I am trying to download the ABIDE dataset using the Nilearn library. I use the code below, as described in Nilearn's documentation, but unfortunately I get an empty list from dataset['func_preproc']. I don't know the reason, because it works fine for the other Nilearn datasets I tested.
import nilearn.datasets

dataset = nilearn.datasets.fetch_abide_pcp(derivatives=['func_preproc', 'func_mean'])
print(dataset['func_preproc'])
and the output is:
[]
As you can see, dataset['func_preproc'] is an empty list.
Does anyone have any idea about this?
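Edit: one thing I still want to rule out is the default quality filter. fetch_abide_pcp has quality_checked and n_subjects parameters, so a minimal check would look like the following (whether the filter is really the cause is just a guess):

import nilearn.datasets

# Try a single subject with the default quality filter disabled
dataset = nilearn.datasets.fetch_abide_pcp(
    derivatives=['func_preproc'],
    quality_checked=False,  # default is True; it drops subjects failing QC
    n_subjects=1,
)
print(dataset['func_preproc'])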
I am trying to create a column that imports the analyst price target from the TipRanks website.
I uploaded two images:
Image 1: you can see the cell that I want to import.
Image 2: you can see my function that doesn't work.
What should I change in order to get this live info?
Thanks.
The site you are checking is JavaScript-generated, so import functions won't work properly on it.
To check, just try to import the whole site's data. If it returns a JavaScript function, then the page is JavaScript-generated.
Sample (tipranks.com)
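The same check can be run outside Sheets; a minimal Python sketch (the requests library and the forecast URL are my assumptions, not part of the original formula):

import requests

# Fetch the raw HTML the server returns, with no JavaScript executed
html = requests.get(
    'https://www.tipranks.com/stocks/csiq/forecast',
    headers={'User-Agent': 'Mozilla/5.0'},
).text

# If the price shown in the browser is missing here, the page is JS-generated
print('50.38' in html)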
What you can do instead is try to find other sites that provide the same data.
I did find one with the same data you are looking for (50.38 for CSIQ): https://www.marketwatch.com/investing/stock/csiq/analystestimates. Since the data is shown as a table, it is easier to import using IMPORTHTML.
Cell formula is:
=INDEX(IMPORTHTML("https://www.marketwatch.com/investing/stock/csiq/analystestimates", "table", 5), 2, 2)
Sample output:
The table is the fifth one in the DOM, and INDEX(table, 2, 2) returns the value in the 2nd row, 2nd column of that table.
If that site does not work for you, try finding other sites that suit your needs, and then use either IMPORTHTML or IMPORTXML depending on the site structure.
When you inspect the network while the website is loading, you will see that the prices come from a call to the forecast endpoint https://www.tipranks.com/stocks/tsla/forecast. This returns an HTML response that is probably generated with JavaScript on the client, because they use React on the frontend, but you can still see the preview in the Network tab of the browser dev tools.
You can then copy the preview into VS Code and prettify it, to try to pinpoint the span holding the price. It won't be an exact science, because the HTML tags are generated with some media queries, but you will get close enough.
If your XPath returns an empty error, delete some tags until you get some text. Use search in Google Sheets to find the highest-price label, and then keep adding tags back until you get the desired value.
Here is what I managed to get:
Lowest price target:
=importxml("https://www.tipranks.com/stocks/snow/forecast", "/html/body/div[1]/div[1]/div[4]/div[1]/div[2]/div[1]/div[4]/div[2]/div[2]/div[4]/div[1]/div[1]/div[5]/span[2]")
Average price target:
=importxml("https://www.tipranks.com/stocks/snow/forecast", "/html/body/div[1]/div[1]/div[4]/div[1]/div[2]/div[1]/div[4]/div[2]/div[2]/div[4]/div[1]/div[1]/div[3]/span[2]")
Highest price target:
=importxml("https://www.tipranks.com/stocks/snow/forecast", "/html/body/div[1]/div[1]/div[4]/div[1]/div[2]/div[1]/div[4]/div[2]/div[2]/div[4]/div[1]/div[1]/div[1]/span[2]")
These paths may change over time as the site evolves, but you can use the steps above to update the formulas.
P.S. I wasn't satisfied with the MarketWatch analyst price targets; I think the wisdom of the crowd is better on TipRanks.
Try this one. It works fine in my personal stock portfolio on Google Sheets (the ticker is in cell A1):
Lowest Price Target:
=importxml(CONCATENATE("https://www.tipranks.com/stocks/", A1,"/forecast"), "//*[@class='colorpurple-dark ml3 mobile_fontSize7 laptop_ml0']")
Average Price Target:
=importxml(CONCATENATE("https://www.tipranks.com/stocks/", A1,"/forecast"), "//*[@class='colorgray-1 ml3 mobile_fontSize7 laptop_ml0']")
Highest Price Target:
=importxml(CONCATENATE("https://www.tipranks.com/stocks/", A1,"/forecast"), "//*[@class='colorpale ml3 mobile_fontSize7 laptop_ml0']")
Using Google Sheets, I am trying to retrieve text passages from the Perseus Scaife Library, which has a working API.
When I query for the document node (=importxml("https://scaife-cts.perseus.org/api/cts?request=GetPassage&urn=urn:cts:greekLit:tlg0527.tlg001.opp-grc2:1.1","/")) I get all the data, including the URNs etc. However, any other xpath_query gives an error.
I know that Google Sheets can access the data, but I would like to be able to select only one node (//p).
You want to retrieve the text in the passage element. If my understanding is correct, how about this answer?
=importxml(A1, "//*[local-name()='passage']")
Result:
Note:
The URL https://scaife-cts.perseus.org/api/cts?request=GetPassage&urn=urn:cts:greekLit:tlg0527.tlg001.opp-grc2:1.1 is URL-encoded and placed in cell A1.
The converted URL is https://scaife-cts.perseus.org/api/cts?request=GetPassage&urn=urn%3acts%3agreekLit%3atlg0527%2etlg001%2eopp%2dgrc2%3a1%2e1.
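For reference, a small Python sketch that reproduces this conversion (the encode_urn helper is hypothetical, not from any library):

def encode_urn(value):
    # Percent-encode every non-alphanumeric character (lowercase hex),
    # which yields exactly the converted URL shown above
    return ''.join(c if c.isalnum() else '%{:02x}'.format(ord(c)) for c in value)

urn = 'urn:cts:greekLit:tlg0527.tlg001.opp-grc2:1.1'
print('https://scaife-cts.perseus.org/api/cts?request=GetPassage&urn=' + encode_urn(urn))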
Reference :
local-name
If this is not what you want, I'm sorry.
I need to store the data presented in the graphs on the Google Ngram website. For example, I want to store the occurrences of "it's" as a percentage from 1800-2008, as presented in the following link: https://books.google.com/ngrams/graph?content=it%27s&year_start=1800&year_end=2008&corpus=0&smoothing=3&share=&direct_url=t1%3B%2Cit%27s%3B%2Cc0.
The data I want is the data you're able to scroll over on the graph. How can I extract this for about 140 different terms (e.g. "it's", "they're", "she's", etc.)?
econpy wrote a nice little module in Python that you can use through a command-line interface.
For your "it's" example, you would need to type this command in a terminal / Windows console:
python getngrams.py it's -startYear=1800 -endYear=2008 -corpus=eng_2009 -smoothing=3
This will automatically save the query result in a CSV file named after your query parameters.
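Once the CSV is written, it can be loaded with pandas; a minimal sketch (the filename below is a hypothetical placeholder, since the real name is derived from the query parameters):

import pandas as pd

# Hypothetical filename; check the working directory for the actual generated name
df = pd.read_csv('its_1800_2008_eng_2009_3.csv', index_col=0)
print(df.head())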
econpy's package, from @HugoMailhot's answer, no longer works (as of 2021) and seems unmaintained.
Here's an updated version, with some improvements for easier integration into Python code:
https://gitlab.com/cpbl/google-ngrams
You can call this from the command line (as in econpy's) to create a CSV file, e.g.
getngrams.py it's -startYear=1800 -endYear=2008 -corpus=eng_2009 -smoothing=3
or call it from Python to get (and plot) the data directly, e.g.:
from getngrams import ngrams
df = ngrams('bells and whistles -startYear=1900 -endYear=2018 -smoothing=2')
df.plot()
The xkcd functionality is still there too.
(Issues, bug-fix pull requests, etc. are welcome there.)
I have been trying to export data from DataSift to a Google BigQuery dataset, but apart from 4 empty rows, no relevant data has been pushed.
I followed the instructions from this link: http://dev.datasift.com/docs/push/connectors/bigquery. I am not sure whether the CSDL code I used is the cause.
For example, I configured a stream using:
wikipedia.title contains "Audi".
The live preview has no output. Also, the only data sources that I've set as active are Interaction and Wikipedia.
Please let me know what might be the reason for this. At the end of every stream recording I don't see any changes, except for the creation of the table mentioned in the destination, with 4 empty rows (some rows have null values, and interaction.type is ds_verify).
Thank you!