I am trying to use the [OpenMeteo API][1] to download CSVs via a script. Using their API Builder and their clickable site, I am able to do it for a single site just fine.
In trying to write a function that accomplishes this in a loop via the requests library, I am stumped. I cannot get the JSON data out of the requests.models.Response object and into JSON, or better yet a CSV/pandas format.
I cannot parse the requests response into anything. The request appears to succeed (a 200 response) and to contain data, but I cannot get it out of the response object!
import requests

latitude = 60.358
longitude = -148.939
request_txt = r"https://archive-api.open-meteo.com/v1/archive?latitude=" + str(latitude) + "&longitude=" + str(longitude) + "&start_date=1959-01-01&end_date=2023-02-10&models=era5_land&daily=temperature_2m_max,temperature_2m_min,temperature_2m_mean,shortwave_radiation_sum,precipitation_sum,rain_sum,snowfall_sum,et0_fao_evapotranspiration&timezone=America%2FAnchorage&windspeed_unit=ms"
r = requests.get(request_txt).json()
[1]: https://open-meteo.com/en/docs/historical-weather-api
The URL needs format=csv added as a parameter. StringIO is then used to hold the downloaded text in memory as a file-like object that pandas can read.
from io import StringIO
import pandas as pd
import requests
latitude = 60.358
longitude = -148.939
url = ("https://archive-api.open-meteo.com/v1/archive?"
       f"latitude={latitude}&"
       f"longitude={longitude}&"
       "start_date=1959-01-01&"
       "end_date=2023-02-10&"
       "models=era5_land&"
       "daily=temperature_2m_max,temperature_2m_min,temperature_2m_mean,"
       "shortwave_radiation_sum,precipitation_sum,rain_sum,snowfall_sum,"
       "et0_fao_evapotranspiration&"
       "timezone=America%2FAnchorage&"
       "windspeed_unit=ms&"
       "format=csv")
with requests.Session() as session:
    response = session.get(url, timeout=30)
    # raises requests.HTTPError for any non-success status code
    response.raise_for_status()

    df = pd.read_csv(StringIO(response.text), sep=",", skiprows=2)
    print(df)
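Alternatively, the response in the question is already a plain Python dict once .json() is called: the daily variables sit under a "daily" key as parallel lists, which pandas can consume directly. A small sketch reusing the question's request_txt URL (verify the key names against your own response):

import pandas as pd
import requests

# request_txt is the default (JSON-format) URL built in the question above
data = requests.get(request_txt, timeout=30).json()

# the "daily" block maps each variable name to a list with one value per day
df = pd.DataFrame(data["daily"])
df["time"] = pd.to_datetime(df["time"])
print(df.head())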
I'm trying to load a CSV dataset directly from an external file system, but I'm getting a 401 Unauthorized response whenever I call sparkContext.addFile(). Is there a way to add authorization headers to the request before adding the file, or a better way to load a CSV file as a DataFrame?
This is what I'm trying now and it throws an exception when I make the addFile() call.
import org.apache.spark.SparkFiles

spark.sparkContext.addFile(urlPath)

val df = spark.read
  .option("header", true)
  .csv("file://" + SparkFiles.get(urlPath))
I am running the following code to retrieve account info by hashtag from the unofficial TikTok API.
API Repository - https://github.com/davidteather/TikTok-Api
Class Definition - https://dteather.com/TikTok-Api/docs/TikTokApi/tiktok.html#TikTokApi.hashtag
However, it seems the maximum number of responses I can get for any hashtag is around 500.
Is there a way I can request more? Say...10K lines of account info?
from TikTokApi import TikTokApi
import pandas as pd

hashtag = "ugc"
count = 50000

with TikTokApi() as api:
    tag = api.hashtag(name=hashtag)
    print(tag.info())

    lst = []
    for video in tag.videos(count=count):
        lst.append(video.author.as_dict)

    df = pd.DataFrame(lst)
    print(df)
The above code for the hashtag "ugc" produces only 482 results, whereas I know there are significantly more results available from TikTok.
It's rate limiting; TikTok will block you after a certain number of requests.
Add proxy support, for example like this:
import random

# proxies.txt holds one host:port entry per line
proxies = open('proxies.txt', 'r').read().splitlines()
proxy = random.choice(proxies)
proxies = {'http': f'http://{proxy}', 'https': f'https://{proxy}'}
It could be more precise, but it's one way to rotate them.
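For illustration, here is a slightly fuller sketch of that rotation idea using plain requests; how the proxy dict actually gets wired into TikTokApi depends on the library version, so treat that part as an assumption and check its docs:

import random
import requests

with open('proxies.txt') as fh:
    proxy_list = fh.read().splitlines()

def get_with_random_proxy(url, **kwargs):
    # pick a fresh proxy for every call so no single IP absorbs the rate limit
    proxy = random.choice(proxy_list)
    proxy_map = {'http': f'http://{proxy}', 'https': f'http://{proxy}'}
    return requests.get(url, proxies=proxy_map, timeout=30, **kwargs)

resp = get_with_random_proxy('https://httpbin.org/ip')  # any test URL
print(resp.json())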
So I can pull spot data from Kraken with:
import requests
url = 'https://api.kraken.com/0/public/OHLC?pair=XBTUSD'
resp = requests.get(url)
resp.json()
But when I try to pull Futures data I always get
{'error': ['EQuery:Unknown asset pair']}
What I've done so far: I take the contract name from this page, https://demo-futures.kraken.com/futures/FI_XBTUSD_220930, which is "FI_BTCUSD_220930":
url = 'https://api.kraken.com/0/public/OHLC?pair=FI_BTCUSD_220930'
resp = requests.get(url)
resp.json()
I've considered that it might be because OHLC data simply isn't available for futures, but even when I try a simpler request, such as just getting info about the ticker, I get the same error.
I've looked in the documentation for separate rules for futures but can't find any reference to what to do differently for them.
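Note that Kraken's futures products live on a separate REST API at futures.kraken.com rather than the spot endpoint at api.kraken.com, which is why the spot OHLC endpoint reports an unknown asset pair. Below is a minimal sketch of hitting the public futures tickers endpoint; the exact path is taken from the Kraken Futures API docs, so double-check it there:

import requests

# Kraken Futures public REST API (separate base URL from the spot API)
url = 'https://futures.kraken.com/derivatives/api/v3/tickers'
resp = requests.get(url, timeout=30)
data = resp.json()

# each entry carries a 'symbol' field identifying the contract, plus price fields
symbols = [t['symbol'] for t in data.get('tickers', [])]
print(symbols[:10])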
I need to make a custom controller in Odoo for getting information about a particular task, and I was able to produce the result. But now I'm facing an issue.
The client needs to retrieve the information by a particular field.
For example,
the client needs to retrieve the information by tracking number, and the data must be in JSON format as well. If the tracking number is 15556456356, the URL should be www.customurl.com/dataset/15556456356
The route for that URL should be @http.route('/dataset/<string:tracking_number>', type='http or json', auth="user or public"). Basically, the method should look like this:
import json

from odoo import http
from odoo.http import Response, request


class Tracking(http.Controller):

    # if the user must be authenticated, use auth="user"
    @http.route('/dataset/<string:tracking_number>', type='http', auth="public")
    def tracking(self, tracking_number):  # use the same variable name as in the route
        # compute the result from the given tracking_number; it should be a dict
        # (or list) so it can be passed to json.dumps
        result = {}
        return Response(json.dumps(result), content_type='application/json;charset=utf-8', status=200)
This method accepts an HTTP request and returns a JSON response; if the client is sending JSON requests, you should change to type='json'. Don't forget to import the file in __init__.py.
Let's take an example: say I want to return some information about a sale.order given an ID in the URL:
import json

from odoo import http
from odoo.http import Response, request


class Tracking(http.Controller):

    @http.route('/dataset/<int:sale_id>', type='http', auth="public")
    def tracking(self, sale_id):
        # read the information as the superuser
        result = request.env['sale.order'].sudo().browse([sale_id]).read(['name', 'date_order'])
        # default=str lets json.dumps serialize the datetime in date_order
        return Response(json.dumps(result, default=str), content_type='application/json;charset=utf-8', status=200)
So when I enter this URL in my browser, http://localhost:8069/dataset/1, the sale order's data comes back as JSON.
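The same route can also be exercised from Python; a small sketch against a hypothetical local Odoo instance (adjust the host, port, and record ID to your setup):

import requests

# hypothetical local Odoo instance and record id
resp = requests.get('http://localhost:8069/dataset/1', timeout=10)
print(resp.status_code)                  # 200 if the record exists
print(resp.headers.get('Content-Type'))  # application/json;charset=utf-8
print(resp.json())                       # the fields read in the controller, e.g. name and date_order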
I am trying to pull stock data from Yahoo and put it into a pandas DataFrame. If I put the API call into a web browser, it returns results, so I think the API still works, but I don't know how to get it into pandas.
Thx
input
import pandas as pd
api = 'https://query1.finance.yahoo.com/v8/finance/chart/AAPL?interval=5m'
df = pd.read_csv(api, skiprows=8, header=None)
I think Yahoo Finance has already shut down its API service to prevent data abuse; a valid token is required to initiate data transfer. You may need to explore another service provider, like Alpha Vantage, as an alternative:
https://www.alphavantage.co/
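If you do go the Alpha Vantage route, here is a minimal sketch of pulling 5-minute intraday bars into pandas; you need your own free API key, and the TIME_SERIES_INTRADAY function and datatype=csv parameter come from their documented query API:

import pandas as pd

api_key = 'YOUR_API_KEY'  # placeholder: use your own free Alpha Vantage key
url = ('https://www.alphavantage.co/query?function=TIME_SERIES_INTRADAY'
       f'&symbol=AAPL&interval=5min&datatype=csv&apikey={api_key}')

# datatype=csv makes the endpoint return plain CSV, which read_csv handles directly
df = pd.read_csv(url)
print(df.head())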