So I can pull spot data from Kraken with:
import requests
url = 'https://api.kraken.com/0/public/OHLC?pair=XBTUSD'
resp = requests.get(url)
resp.json()
But when I try to pull Futures data I always get
{'error': ['EQuery:Unknown asset pair']}
What I've done so far: I take the pair name from this page, https://demo-futures.kraken.com/futures/FI_XBTUSD_220930, which is "FI_BTCUSD_220930":
url = 'https://api.kraken.com/0/public/OHLC?pair=FI_BTCUSD_220930'
resp = requests.get(url)
resp.json()
I've considered that OHLC data might simply not be available for futures, but even a simpler request, such as fetching ticker info, returns the same error.
I've looked in the documentation for separate rules for futures but can't find any reference to what to do differently for futures.
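One likely explanation (an assumption based on Kraken's docs, worth verifying): futures are served by the separate Kraken Futures API on its own host, so the spot endpoint at api.kraken.com never recognizes futures symbols. A minimal sketch against the futures host:

```python
# Kraken Futures lives on a separate host from the spot API (assumption based
# on the Kraken Futures docs); api.kraken.com only knows spot pairs, which
# would explain the "Unknown asset pair" error above.
FUTURES_BASE = 'https://futures.kraken.com/derivatives/api/v3'

def futures_tickers_url() -> str:
    """Build the URL for the Kraken Futures public tickers endpoint."""
    return f'{FUTURES_BASE}/tickers'

# live call (requires network):
# import requests
# print(requests.get(futures_tickers_url(), timeout=10).json())
```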
I used the below link.
https://api.roblox.com/users/**$UserID**/onlinestatus
for example:
https://api.roblox.com/users/543226965/onlinestatus
I have been receiving an error message for a while now. The error message is given below.
{"errors":[{"code":404,"message":"NotFound"}]}
I heard the Roblox API has changed, but I cannot find the right solution, so I would be grateful for any answer.
Thanks.
It appears that you're trying to access the user's online status. As of right now, you can access this information by querying https://api.roblox.com/users/<user_id>, which returns a JSON object. Looking up IsOnline in the dictionary should return what you're trying to get.
Here's an example I coded in Python:
import requests
res = requests.get(url='https://api.roblox.com/users/543226965')
res = res.json()
print(res['IsOnline'])  # prints True or False
I am trying to use the [OpenMeteo API][1] to download CSVs via script. Using their API Builder, and their click-able site, I am able to do it for a single site just fine.
In trying to write a function to accomplish this in a loop via the requests library, I am stumped. I cannot get the JSON data out of the requests.models.Response object and into JSON, or better yet, CSV/pandas format.
The request appears to succeed (status 200) and to contain data, but I cannot parse anything out of the response!
import requests

latitude = 60.358
longitude = -148.939
request_txt = ("https://archive-api.open-meteo.com/v1/archive?latitude=" + str(latitude)
               + "&longitude=" + str(longitude)
               + "&start_date=1959-01-01&end_date=2023-02-10&models=era5_land"
               + "&daily=temperature_2m_max,temperature_2m_min,temperature_2m_mean,"
               "shortwave_radiation_sum,precipitation_sum,rain_sum,snowfall_sum,"
               "et0_fao_evapotranspiration&timezone=America%2FAnchorage&windspeed_unit=ms")
r = requests.get(request_txt).json()
[1]: https://open-meteo.com/en/docs/historical-weather-api
The URL needs format=csv added as a parameter. StringIO is then used to turn the downloaded text into an in-memory file object that pandas can read.
from io import StringIO
import pandas as pd
import requests
latitude = 60.358
longitude = -148.939
url = ("https://archive-api.open-meteo.com/v1/archive?"
f"latitude={latitude}&"
f"longitude={longitude}&"
"start_date=1959-01-01&"
"end_date=2023-02-10&"
"models=era5_land&"
"daily=temperature_2m_max,temperature_2m_min,temperature_2m_mean,"
"shortwave_radiation_sum,precipitation_sum,rain_sum,snowfall_sum,"
"et0_fao_evapotranspiration&"
"timezone=America%2FAnchorage&"
"windspeed_unit=ms&"
"format=csv")
with requests.Session() as session:
    response = session.get(url, timeout=30)
    response.raise_for_status()  # raise an exception on HTTP error codes
    df = pd.read_csv(StringIO(response.text), sep=",", skiprows=2)

print(df)
I am running the following code to retrieve account info by hashtag from the unofficial TikTok Api.
API Repository - https://github.com/davidteather/TikTok-Api
Class Definition - https://dteather.com/TikTok-Api/docs/TikTokApi/tiktok.html#TikTokApi.hashtag
However, it seems the maximum number of responses I can get for any hashtag is around 500.
Is there a way I can request more? Say...10K lines of account info?
from TikTokApi import TikTokApi
import pandas as pd
hashtag = "ugc"
count = 50000
with TikTokApi() as api:
    tag = api.hashtag(name=hashtag)
    print(tag.info())

    lst = []
    for video in tag.videos(count=count):
        lst.append(video.author.as_dict)

    df = pd.DataFrame(lst)
    print(df)
The above code for the hashtag "ugc" produces only 482 results, whereas I know there are significantly more results available on TikTok.
It's rate limiting: TikTok will block you after a certain number of requests.
Add proxy support like this:
import random

proxies = open('proxies.txt', 'r').read().splitlines()
proxy = random.choice(proxies)
proxies = {'http': f'http://{proxy}', 'https': f'https://{proxy}'}
It could be more precise, but it's one way to rotate them.
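The rotation above can be wrapped into a small retry helper for plain requests calls (a sketch; the function name and structure are illustrative, and how you hand a proxy to TikTokApi itself depends on that library's own options):

```python
import random
import requests

def get_with_rotating_proxy(url, proxy_list, attempts=3):
    """Try a GET through randomly chosen proxies, moving on when one fails.

    proxy_list is assumed to hold 'host:port' strings, one per proxy.
    """
    last_error = None
    for _ in range(attempts):
        proxy = random.choice(proxy_list)
        proxies = {'http': f'http://{proxy}', 'https': f'http://{proxy}'}
        try:
            return requests.get(url, proxies=proxies, timeout=10)
        except requests.RequestException as err:
            last_error = err  # this proxy failed; try another one
    raise last_error
```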
With reference to https://docs.pro.coinbase.com/#get-account-history
HTTP REQUEST
GET /accounts/<account_id>/holds
I am struggling to write Python code to fetch the account holds via a paginated API request, and I could not find any example of implementing it.
Could you please advise me on how to proceed with this one?
According to Coinbase Pro documentation, pagination works like so (example with the holds endpoint):
import requests
account_id = ...
url = f'https://api.pro.coinbase.com/accounts/{account_id}/holds'
response = requests.get(url)
assert response.ok
first_page = response.json() # here is the first page
cursor = response.headers['CB-AFTER']
response = requests.get(url, params={'after': cursor})
assert response.ok
second_page = response.json() # then the next one
cursor = response.headers['CB-AFTER']
# and so on, repeat until response.json() is an empty list
You should wrap this properly into helper functions or a class, or, even better, use an existing library and save yourself a lot of time.
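For example, the loop above can be folded into a generator (a sketch; in practice this private endpoint also needs Coinbase Pro authentication headers, which are omitted here):

```python
import requests

def iter_holds(account_id, session=None):
    """Yield every hold for an account, following Coinbase Pro's
    cursor-based pagination as described above."""
    s = session or requests.Session()
    url = f'https://api.pro.coinbase.com/accounts/{account_id}/holds'
    params = {}
    while True:
        resp = s.get(url, params=params)
        resp.raise_for_status()
        page = resp.json()
        if not page:  # an empty list means there are no more pages
            return
        yield from page
        # the CB-AFTER header is the cursor pointing at the next page
        params['after'] = resp.headers['CB-AFTER']
```

A session object can be injected, which also makes the pagination logic easy to test without touching the network.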
I'm trying to write a react native app which will stream some tracks from Soundcloud. As a test, I've been playing with the API using python, and I'm able to make requests to resolve the url, pull the playlists/tracks, and everything else I need.
With that said, when making a request to the stream_url of any given track, I get a 401 error.
The current url in question is:
https://api.soundcloud.com/tracks/699691660/stream?client_id=PGBAyVqBYXvDBjeaz3kSsHAMnr1fndq1
I've tried it without the ?client_id... part, I've tried replacing the ? with &, I've tried getting another client_id, and I've tried it with allow_redirects set to both True and False, but nothing seems to work. Any help would be greatly appreciated.
The streamable property of every track is True, so it shouldn't be a permissions issue.
Edit:
After doing a bit of research, I've found a semi-successful workaround. The /stream endpoint of the API is still not working, but if you change your destination endpoint to http://feeds.soundcloud.com/users/soundcloud:users:/sounds.rss, it'll give you an RSS feed that's (mostly) the same as what you'd get by using the tracks or playlists API endpoint.
The link contained therein can be streamed.
Okay, I think I have found a generalized solution that will work for most people. I wish it were easier, but it's the simplest thing I've found yet.
1. Use the API to pull the user's tracks. You can use linked_partitioning and the next_href property to gather everything, because there's a maximum limit of 200 tracks per call.
2. From the JSON you pulled down, use the permalink_url key to get the same address you would type into the browser.
3. Make a request to the permalink_url and access the HTML. You'll need to do some parsing, but the URL you want will be something to the effect of:
"https://api-v2.soundcloud.com/media/soundcloud:tracks:488625309/c0d9b93d-4a34-4ccf-8e16-7a87cfaa9f79/stream/progressive"
You could probably use a regex to parse this out simply.
4. Make a request to this URL, adding ?client_id=..., and it'll give you yet another URL in its return JSON.
5. Using the URL returned from the previous step, you can link directly to it in the browser, and it'll take you to your track content. I checked in VLC by inputting the link, and it streams correctly.
Hopefully this helps some of you out with your development.
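The HTML-parsing step can be sketched with a regular expression (the pattern is an assumption about how the stream URL appears in the page source, so it may need adjusting):

```python
import re

# Matches api-v2 progressive stream URLs embedded in a track page's HTML
# (the pattern is a guess at the markup; tweak it if SoundCloud changes the page).
STREAM_RE = re.compile(r'https://api-v2\.soundcloud\.com/media/[^"\s]+/stream/progressive')

def extract_progressive_url(html):
    """Return the first progressive stream URL found in the HTML, or None."""
    match = STREAM_RE.search(html)
    return match.group(0) if match else None

# usage (requires network):
# import requests
# html = requests.get(permalink_url).text
# print(extract_progressive_url(html))
```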
Since I have the same problem, the answer from #Default motivated me to look for a solution. But I did not understand the workaround with the permalink_url in steps 2 and 3. An easier solution could be:
Fetch, for example, a user's track likes using the api-v2 endpoint like this:
https://api-v2.soundcloud.com/users/<user_id>/track_likes?client_id=<client_id>
In the response we can find the needed URL, as mentioned by #Default in his answer:
collection: [
{
track: {
media: {
transcodings:[
...
{
url: "https://api-v2.soundcloud.com/media/soundcloud:tracks:713339251/0ab1d60e-e417-4918-b10f-81d572b862dd/stream/progressive"
...
}
]
}
}
...
]
Make a request to this URL with client_id as a query param, and you get another URL with which you can stream/download the track.
Note that api-v2 is still not public, and requests from your client will probably be blocked by CORS.
As mentioned by #user208685, the solution can be a bit simpler using the SoundCloud API v2:
1. Obtain the track ID (e.g. using the public API at https://developers.soundcloud.com/docs)
2. Get JSON from https://api-v2.soundcloud.com/tracks/TRACK_ID?client_id=CLIENT_ID
3. From the JSON, parse the progressive MP3 stream URL
4. From the stream URL, get the MP3 file URL
5. Play media from the MP3 file URL
Note: this link is only valid for a limited amount of time and can be regenerated by repeating steps 3 to 5.
Example in node (with node-fetch):
const clientId = 'YOUR_CLIENT_ID';
(async () => {
let response = await fetch(`https://api.soundcloud.com/resolve?url=https://soundcloud.com/d-o-lestrade/gabriel-ananda-maceo-plex-solitary-daze-original-mix&client_id=${clientId}`);
const track = await response.json();
const trackId = track.id;
response = await fetch(`https://api-v2.soundcloud.com/tracks/${trackId}?client_id=${clientId}`);
const trackV2 = await response.json();
const streamUrl = trackV2.media.transcodings.filter(
transcoding => transcoding.format.protocol === 'progressive'
)[0].url;
response = await fetch(`${streamUrl}?client_id=${clientId}`);
const stream = await response.json();
const mp3Url = stream.url;
console.log(mp3Url);
})();
For a similar solution in Python, check this GitHub issue: https://github.com/soundcloud/soundcloud-python/issues/87
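A rough Python equivalent of the Node example above might look like this (a sketch: the api-v2 endpoints are unofficial, CLIENT_ID and the track page URL are placeholders, and the helper names are mine):

```python
import requests

CLIENT_ID = 'YOUR_CLIENT_ID'  # placeholder

def pick_progressive(transcodings):
    """Return the URL of the first progressive (MP3) transcoding."""
    return next(t['url'] for t in transcodings
                if t['format']['protocol'] == 'progressive')

def resolve_mp3_url(track_page_url):
    # steps 1-2: resolve the public track page to an ID, then fetch v2 metadata
    track = requests.get('https://api.soundcloud.com/resolve',
                         params={'url': track_page_url,
                                 'client_id': CLIENT_ID}).json()
    track_v2 = requests.get(f"https://api-v2.soundcloud.com/tracks/{track['id']}",
                            params={'client_id': CLIENT_ID}).json()
    # steps 3-5: pick the progressive transcoding and exchange it for the MP3 URL
    stream_url = pick_progressive(track_v2['media']['transcodings'])
    return requests.get(stream_url,
                        params={'client_id': CLIENT_ID}).json()['url']
```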