I work on analytics for my employer's organizational YouTube channels. I've been successfully retrieving data using the YouTube Data API; however, I am unable to obtain metrics from the YouTube Analytics API (via reports.query).
Here are the particulars...
Authentication: SAML (organizational GSuite account)
Authorization: OAuth2
My organizational account is an "owner" (though not primary owner) of the YouTube channel.
Google Cloud Platform project: internal (organizational)
I've tried various SCOPE combinations.
When I specify ids="channel==<channel_id>", I receive a 403 (forbidden) response.
When I specify ids="channel==MINE", I get a 200 status with headers, but depending on the dimension and metric selections I get either no records or a single record with zeros for every metric. My suspicion is that "channel==MINE" resolves to "my" channel rather than the brand account's channel, since the API wouldn't otherwise know which brand channel to use.
My question is: how can I ensure that my organizational account can obtain data via the YouTube Analytics API as well as the YouTube Reporting API? Is there an administrative site that assigns these access rights? If so, where is it?
The following code is essentially sample code from the API Explorer...
import os

import google_auth_oauthlib.flow
import googleapiclient.discovery
import googleapiclient.errors

scopes = ["https://www.googleapis.com/auth/youtube.readonly"]

def main():
    # Disable OAuthlib's HTTPS verification when running locally.
    # *DO NOT* leave this option enabled in production.
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"

    api_service_name = "youtubeAnalytics"
    api_version = "v2"
    client_secrets_file = "client_secret.json"

    # Get credentials and create an API client
    flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(
        client_secrets_file, scopes)
    # credentials = flow.run_console()
    credentials = flow.run_local_server()
    youtube_analytics = googleapiclient.discovery.build(
        api_service_name, api_version, credentials=credentials)

    request = youtube_analytics.reports().query(
        ids="channel==MINE",
        startDate="2020-09-01",
        endDate="2020-09-30",
        dimensions="day",
        metrics="views",
    )
    response = request.execute()
    print(response)

if __name__ == "__main__":
    main()
The code as shown yields...
{'kind': 'youtubeAnalytics#resultTable', 'columnHeaders': [{'name': 'day', 'columnType': 'DIMENSION', 'dataType': 'STRING'}, {'name': 'views', 'columnType': 'METRIC', 'dataType': 'INTEGER'}], 'rows': []}
With ids set to "channel==UC6L0DBYWqAkmwfawTUMaR3g", the following is returned...
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://youtubeanalytics.googleapis.com/v2/reports?ids=channel%3D%3DUC6L0DBYWqAkmwfawTUMaR3g&startDate=2020-09-01&endDate=2020-09-30&dimensions=day&metrics=views&alt=json returned "Forbidden">
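Not an answer in itself, but since various scope combinations were mentioned: the Analytics API has its own read-only scopes (yt-analytics.readonly, plus yt-analytics-monetary.readonly for revenue metrics). Here is a minimal sketch of the same query requesting those scopes explicitly, assuming the same client_secret.json as above:

import google_auth_oauthlib.flow
import googleapiclient.discovery

SCOPES = [
    "https://www.googleapis.com/auth/yt-analytics.readonly",
    "https://www.googleapis.com/auth/yt-analytics-monetary.readonly",
]

flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(
    "client_secret.json", SCOPES)
# The consent screen's account picker typically also lists Brand Accounts the
# signed-in user manages; the identity chosen there is what "channel==MINE" resolves to.
credentials = flow.run_local_server()

yt_analytics = googleapiclient.discovery.build(
    "youtubeAnalytics", "v2", credentials=credentials)

response = yt_analytics.reports().query(
    ids="channel==MINE",
    startDate="2020-09-01",
    endDate="2020-09-30",
    dimensions="day",
    metrics="views",
).execute()
print(response)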
Related
I'm trying to fetch an email list from a government tenant via the Graph API, and it worked fine until last week. I'm using the client credentials flow. Last week I started to get the following error when trying to authorize my app in government tenants:
oauthlib.oauth2.rfc6749.errors.InvalidClientIdError: (invalid_request) AADSTS900441: Requests to applications hosted in the public cloud are not supported for USGov tenants.
Is there a way to authorize an application from the public Azure cloud to read data from a government tenant?
EDIT: code example and debug logs
import requests_oauthlib
from oauthlib.oauth2 import BackendApplicationClient

import config  # the poster's settings module with CLIENT_ID, CLIENT_SECRET, TOKEN_ENDPOINT, RESOURCE, API_VERSION

client = BackendApplicationClient(client_id=config.CLIENT_ID)
MSGRAPH = requests_oauthlib.OAuth2Session(
    client=client
)
token = MSGRAPH.fetch_token(
    'https://login.microsoftonline.us' + '/<tenant>' + config.TOKEN_ENDPOINT,
    client_id=config.CLIENT_ID,
    client_secret=config.CLIENT_SECRET,
    include_client_id=True,
    scope=['https://graph.microsoft.us/.default'])

endpoint = config.RESOURCE + config.API_VERSION + '/users'
graphdata = MSGRAPH.get(endpoint).json()
DEBUG:requests_oauthlib.oauth2_session:Requesting url https://login.microsoftonline.us/<tenant-id>/oauth2/v2.0/token using method POST.
DEBUG:requests_oauthlib.oauth2_session:Supplying headers {u'Content-Type': u'application/x-www-form-urlencoded;charset=UTF-8', u'Accept': u'application/json'} and data {u'client_secret': u'...', u'grant_type': u'client_credentials', u'client_id': u'...', u'scope': u'https://graph.microsoft.us/.default'}
DEBUG:requests_oauthlib.oauth2_session:Passing through key word arguments {'verify': True, 'json': None, 'proxies': None, 'timeout': None, 'auth': None}.
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): login.microsoftonline.us:443
DEBUG:urllib3.connectionpool:https://login.microsoftonline.us:443 "POST /<tenant-id>/oauth2/v2.0/token HTTP/1.1" 400 522
DEBUG:requests_oauthlib.oauth2_session:Prepared fetch token request body grant_type=client_credentials&client_id=...&client_secret=...&scope=https%3A%2F%2Fgraph.microsoft.us%2F.default
DEBUG:requests_oauthlib.oauth2_session:Request to fetch token completed with status 400.
Basically, I see this error when I try to fetch the access token. Admin consent was already given to my application by the tenant admin.
This code worked for Gov tenants for a month or so and then suddenly stopped working.
AAD started enforcing this about a month ago: GCC High/DoD tenants cannot use confidential apps published in the commercial cloud. You need to publish your app from a GCC High/DoD tenant.
I have an app where the user details are passed as a JWT containing information about the current user and its roles.
Every time the user logs in (via a Keycloak instance), the information from the JWT is parsed on my end in a function that updates the user object via SQLAlchemy. However, since there is no user object being passed back and forth in the backend, I have to parse the JWT for roles for every action that requires it. I also have a need for auditing, and due to the structure of the app, this module does not necessarily have access to the request objects at the time of logging.
I'm looking for a neat way to make something like flask_user's current_user() functionality work by mapping JWT -> ORM user object, to be able to transparently get the current user. Is there any way to go about this? The user registration and so on is completely separate from the app, and Flask only knows which user it is based on tokens in the requests that are being sent.
TLDR; Is there a way to load a user from the DB based on an already issued JWT (which contains information to map to a user), and is there perhaps already a lib or extension to flask that supports this?
I use a decorator to parse the JWT using PyJWT.
From the parsed token you can then get the user and do the proper authorization.
If you don't want to add the decorator to every function that requires authorization, you can use Flask's before_request (see the sketch after the decorator code below).
from functools import wraps

from flask import Response, current_app, request
from jwt import decode
from jwt.exceptions import (DecodeError, ExpiredSignatureError,
                            InvalidSignatureError)


def authorize(func):
    @wraps(func)
    def check_authorization(*args, **kwargs):
        try:
            jwt_token = request.cookies.get('auth_token')  # get token from the request
            if jwt_token is None:
                return Response(status=401, headers={'WWW-Authenticate': 'Bearer'})
            token = decode(
                jwt_token,
                key='pub_key',  # public key used to validate the token
                algorithms=['RS256'],  # list of algorithms the token could be signed with
                verify=True
            )
            # you can call another function here to check user roles,
            # e.g. authorize(token['sub'])
            return func(*args, **kwargs)
        except (InvalidSignatureError, DecodeError, ExpiredSignatureError):
            return Response(
                response='{ "error": "token_invalid"}',
                status=401,
                headers={'WWW-Authenticate': 'Bearer'})
    return check_authorization
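Building on the suggestion above, here is a minimal sketch of the same check hooked into before_request instead of a per-view decorator; the decode arguments mirror the decorator, and PUBLIC_ENDPOINTS is a hypothetical allow-list of endpoints that skip the check:

from flask import Flask, Response, request
from jwt import decode
from jwt.exceptions import (DecodeError, ExpiredSignatureError,
                            InvalidSignatureError)

app = Flask(__name__)
PUBLIC_ENDPOINTS = {'static', 'health'}  # hypothetical endpoints that skip the check

@app.before_request
def check_jwt():
    # Returning a response object here short-circuits the request before the view runs.
    if request.endpoint in PUBLIC_ENDPOINTS:
        return None
    jwt_token = request.cookies.get('auth_token')
    if jwt_token is None:
        return Response(status=401, headers={'WWW-Authenticate': 'Bearer'})
    try:
        decode(jwt_token, key='pub_key', algorithms=['RS256'])
    except (InvalidSignatureError, DecodeError, ExpiredSignatureError):
        return Response(response='{ "error": "token_invalid"}', status=401,
                        headers={'WWW-Authenticate': 'Bearer'})
    return None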
This is supported with flask-jwt-extended: https://flask-jwt-extended.readthedocs.io/en/stable/complex_objects_from_token/
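For the JWT -> ORM user mapping asked about in the question, here is a minimal sketch with flask-jwt-extended (4.x) along the lines of that documentation page; User, db_session and the Keycloak public key file are hypothetical stand-ins for your own SQLAlchemy model, session and key:

from flask import Flask, jsonify
from flask_jwt_extended import JWTManager, current_user, jwt_required

from myapp.models import User, db_session  # hypothetical SQLAlchemy model and session

app = Flask(__name__)
app.config["JWT_ALGORITHM"] = "RS256"
app.config["JWT_PUBLIC_KEY"] = open("keycloak_public_key.pem").read()  # assumed key file
jwt = JWTManager(app)

@jwt.user_lookup_loader
def load_user(_jwt_header, jwt_data):
    # Map the token's subject claim to an ORM user object; runs once per protected request.
    return db_session.query(User).filter_by(external_id=jwt_data["sub"]).one_or_none()

@app.route("/whoami")
@jwt_required()
def whoami():
    # current_user is whatever load_user returned for this request's token.
    return jsonify(id=current_user.id, roles=current_user.roles)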
I'm building a website and I'm using the Spotify API as a music library. I would like to add more filters and ordering options for searching tracks than the API allows, so I was wondering what track/song data I can save to my DB from the API, such as artist name or popularity.
I would like to save: Name, Artists, Album and some other stuff. Is that possible or is it against the terms and conditions?
Thanks in advance!
Yes, it is possible.
The Spotify API exposes its data through endpoints.
Spotify API endpoint reference here.
Each endpoint deals with the specific kind of data being requested by the client (you).
I'll give you one example. The same logic applies to all other endpoints.
import json
import requests
"""
Import libraries in order to make API calls.
Alternatively, you can also use a wrapper like "Spotipy"
instead of requesting directly.
"""

# hit the desired endpoint
SEARCH_ENDPOINT = 'https://api.spotify.com/v1/search'

# define your call
def search_by_track_and_artist(artist, track):
    path = 'token.json'  # you need a token for this call;
    # the endpoint reference page will provide you with one
    # and you can store it in a file
    with open(path) as t:
        token = json.load(t)
    # call the API with authentication
    myparams = {'type': 'track'}
    myparams['q'] = "artist:{} track:{}".format(artist, track)
    resp = requests.get(SEARCH_ENDPOINT, params=myparams,
                        headers={"Authorization": "Bearer {}".format(token)})
    return resp.json()
try it:
search_by_track_and_artist('Radiohead', 'Karma Police')
Store the data and process it as you wish. But you must comply with Spotify terms in order to make it public.
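For the fields mentioned in the question (name, artists, album, popularity), here is a small sketch of pulling them out of the search response above before writing them to your own DB; the storage step is left as a placeholder:

def extract_track_fields(search_response):
    # The search endpoint nests results under tracks -> items.
    rows = []
    for item in search_response.get('tracks', {}).get('items', []):
        rows.append({
            'spotify_id': item['id'],
            'name': item['name'],
            'artists': [artist['name'] for artist in item['artists']],
            'album': item['album']['name'],
            'popularity': item['popularity'],
        })
    return rows

result = search_by_track_and_artist('Radiohead', 'Karma Police')
for row in extract_track_fields(result):
    print(row)  # replace with your own DB insert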
sidenote: Spotipy docs.
I am trying to add users to my Google Analytics account through the API but the code yields this error:
googleapiclient.errors.HttpError: https://www.googleapis.com/analytics/v3/management/accounts/**accountID**/entityUserLinks?alt=json returned "Insufficient Permission">
I have Admin rights to this account - MANAGE USERS. I can add or delete users through the Google Analytics interface but not through the API. I have also added the service account email to GA as a user. The scope is set to analytics.manage.users.
This is the code snippet I am using in my add_user function, which is the same code as that provided in the API documentation.
def add_user(service):
    try:
        service.management().accountUserLinks().insert(
            accountId='XXXXX',
            body={
                'permissions': {
                    'local': [
                        'EDIT',
                    ]
                },
                'userRef': {
                    'email': 'ABC.DEF@gmail.com'
                }
            }
        ).execute()
    except TypeError, error:
        # Handle errors in constructing a query.
        print 'There was an error in constructing your query : %s' % error
    return None
Any help will be appreciated. Thank you!!
The problem was that I was using a service account when I should have been using an installed application. I did not need a service account since I had access using my own credentials. That did the trick for me!
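For illustration, here is a minimal sketch of the installed-application flow with the user-management scope, using google_auth_oauthlib; the client_secret.json path and the account ID are placeholders, and the insert call is the same one as in the question:

from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/analytics.manage.users']

# OAuth client of type "Desktop app"; this authorizes as your own Google account.
flow = InstalledAppFlow.from_client_secrets_file('client_secret.json', SCOPES)
credentials = flow.run_local_server()

service = build('analytics', 'v3', credentials=credentials)
service.management().accountUserLinks().insert(
    accountId='XXXXX',  # placeholder account ID, as in the question
    body={
        'permissions': {'local': ['EDIT']},
        'userRef': {'email': 'ABC.DEF@gmail.com'}
    }
).execute()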
Also remember that you have to specify the scope you would like to use. This example (the slightly altered sample by Google) defines two scopes by default which would NOT allow you to insert users (they both grant read-only permissions) and would result in "Error 403 Forbidden" when trying to do so.
The required scope is noted in the code below:
from apiclient.discovery import build
from googleapiclient.errors import HttpError
from oauth2client.service_account import ServiceAccountCredentials


def get_service(api_name, api_version, scopes, key_file_location):
    """Get a service that communicates to a Google API.

    Args:
        api_name: The name of the api to connect to.
        api_version: The api version to connect to.
        scopes: A list of auth scopes to authorize for the application.
        key_file_location: The path to a valid service account JSON key file.

    Returns:
        A service that is connected to the specified API.
    """
    credentials = ServiceAccountCredentials.from_json_keyfile_name(
        key_file_location, scopes=scopes)

    # Build the service object.
    service = build(api_name, api_version, credentials=credentials)

    return service


def get_first_profile_id(service):
    # Use the Analytics service object to get the first profile id.

    # Get a list of all Google Analytics accounts for this user
    accounts = service.management().accounts().list().execute()

    if accounts.get('items'):
        # Get the first Google Analytics account.
        account = accounts.get('items')[0].get('id')
        # Do something, e.g. get account users & insert new ones
        # ...
        return account


def main():
    # Define the auth scopes to request.
    # Add here
    # https://www.googleapis.com/auth/analytics.manage.users
    # to be able to insert users as well:
    scopes = [
        'https://www.googleapis.com/auth/analytics.readonly',
        'https://www.googleapis.com/auth/analytics.manage.users.readonly',
    ]
    key_file_location = 'my_key_file.json'

    # Authenticate and construct service.
    service = get_service(
        api_name='analytics',
        api_version='v3',
        scopes=scopes,
        key_file_location=key_file_location)

    profile_id = get_first_profile_id(service)
    print(profile_id)


if __name__ == '__main__':
    main()
I'm setting up a (headless) web server that lets people build their own custom time-lapse movies.
Several people want to upload the time-lapse videos they make to YouTube.
Rather than have the user download the video to their laptop
and then manually upload it to YouTube,
is there a way I can write some software on my web server that takes the video file on my web server and uploads it directly to that user's account on YouTube?
I've been told that asking my users for their YouTube handle and password is the Wrong Thing To Do, and I should be using the YouTube V3 API with Oauth.
I tried the techniques listed at
"I want to upload a video from my web page to youtube by using javascript youtube API",
which seems to "work", but every time I had to download the video to that person's laptop and then upload it from the laptop to YouTube. Is there a way to tweak that system to upload directly from my server to YouTube?
I found some python code that (after I set up my client_secrets.json) lets me upload videos directly from my server directly to someone's YouTube account after that person did the Oauth authentication.
But the first time some new person tries to upload a video to some new YouTube account that my server has never dealt with before, it either
(a) pops open a web browser on my server, and then if I VNC to the server and type in a YouTube handle and password into that web browser, it gets authenticated -- but I'd rather not do that for every user.
(b) with the "--noauth_local_webserver" option, spits out a URL on the command line and waits. Then if I manually copy that URL and paste it into a web browser, log in to YouTube, copy-and-paste the token back into this application that is still waiting for input on the command line, that person gets authenticated. But I'd rather not do that for every user. I guess that would be OK if I could capture that URL in my cgi-bin script and stick it in a web page, and then later somehow get the authentication response and cram it back into this program, but how? I don't even see that print statement or the raw_input statement in this code.
#!/usr/bin/python

# https://developers.google.com/youtube/v3/code_samples/python#upload_a_video
# which is identical to the code sample at
# https://developers.google.com/youtube/v3/docs/videos/insert

import httplib
import httplib2
import os
import random
import sys
import time

from apiclient.discovery import build
from apiclient.errors import HttpError
from apiclient.http import MediaFileUpload
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow

# Explicitly tell the underlying HTTP transport library not to retry, since
# we are handling retry logic ourselves.
httplib2.RETRIES = 1

# Maximum number of times to retry before giving up.
MAX_RETRIES = 10

# Always retry when these exceptions are raised.
RETRIABLE_EXCEPTIONS = (httplib2.HttpLib2Error, IOError, httplib.NotConnected,
  httplib.IncompleteRead, httplib.ImproperConnectionState,
  httplib.CannotSendRequest, httplib.CannotSendHeader,
  httplib.ResponseNotReady, httplib.BadStatusLine)

# Always retry when an apiclient.errors.HttpError with one of these status
# codes is raised.
RETRIABLE_STATUS_CODES = [500, 502, 503, 504]

# The CLIENT_SECRETS_FILE variable specifies the name of a file that contains
# the OAuth 2.0 information for this application, including its client_id and
# client_secret. You can acquire an OAuth 2.0 client ID and client secret from
# the Google Developers Console at
# https://console.developers.google.com/.
# Please ensure that you have enabled the YouTube Data API for your project.
# For more information about using OAuth2 to access the YouTube Data API, see:
# https://developers.google.com/youtube/v3/guides/authentication
# For more information about the client_secrets.json file format, see:
# https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
CLIENT_SECRETS_FILE = "client_secrets.json"

# This OAuth 2.0 access scope allows an application to upload files to the
# authenticated user's YouTube channel, but doesn't allow other types of access.
YOUTUBE_UPLOAD_SCOPE = "https://www.googleapis.com/auth/youtube.upload"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

# This variable defines a message to display if the CLIENT_SECRETS_FILE is
# missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0

To make this sample run you will need to populate the client_secrets.json file
found at:

   %s

with information from the Developers Console
https://console.developers.google.com/

For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__),
                                   CLIENT_SECRETS_FILE))

VALID_PRIVACY_STATUSES = ("public", "private", "unlisted")


def get_authenticated_service(args):
  flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
    scope=YOUTUBE_UPLOAD_SCOPE,
    message=MISSING_CLIENT_SECRETS_MESSAGE)

  storage = Storage("%s-oauth2.json" % sys.argv[0])
  credentials = storage.get()

  if credentials is None or credentials.invalid:
    credentials = run_flow(flow, storage, args)

  return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
    http=credentials.authorize(httplib2.Http()))


def initialize_upload(youtube, options):
  tags = None
  if options.keywords:
    tags = options.keywords.split(",")

  body=dict(
    snippet=dict(
      title=options.title,
      description=options.description,
      tags=tags,
      categoryId=options.category
    ),
    status=dict(
      privacyStatus=options.privacyStatus
    )
  )

  # Call the API's videos.insert method to create and upload the video.
  insert_request = youtube.videos().insert(
    part=",".join(body.keys()),
    body=body,
    # The chunksize parameter specifies the size of each chunk of data, in
    # bytes, that will be uploaded at a time. Set a higher value for
    # reliable connections as fewer chunks lead to faster uploads. Set a lower
    # value for better recovery on less reliable connections.
    #
    # Setting "chunksize" equal to -1 in the code below means that the entire
    # file will be uploaded in a single HTTP request. (If the upload fails,
    # it will still be retried where it left off.) This is usually a best
    # practice, but if you're using Python older than 2.6 or if you're
    # running on App Engine, you should set the chunksize to something like
    # 1024 * 1024 (1 megabyte).
    media_body=MediaFileUpload(options.file, chunksize=-1, resumable=True)
  )

  resumable_upload(insert_request)


# This method implements an exponential backoff strategy to resume a
# failed upload.
def resumable_upload(insert_request):
  response = None
  error = None
  retry = 0
  while response is None:
    try:
      print "Uploading file..."
      status, response = insert_request.next_chunk()
      if 'id' in response:
        print "Video id '%s' was successfully uploaded." % response['id']
      else:
        exit("The upload failed with an unexpected response: %s" % response)
    except HttpError, e:
      if e.resp.status in RETRIABLE_STATUS_CODES:
        error = "A retriable HTTP error %d occurred:\n%s" % (e.resp.status,
                                                             e.content)
      else:
        raise
    except RETRIABLE_EXCEPTIONS, e:
      error = "A retriable error occurred: %s" % e

    if error is not None:
      print error
      retry += 1
      if retry > MAX_RETRIES:
        exit("No longer attempting to retry.")

      max_sleep = 2 ** retry
      sleep_seconds = random.random() * max_sleep
      print "Sleeping %f seconds and then retrying..." % sleep_seconds
      time.sleep(sleep_seconds)


if __name__ == '__main__':
  argparser.add_argument("--file", required=True, help="Video file to upload")
  argparser.add_argument("--title", help="Video title", default="Test Title")
  argparser.add_argument("--description", help="Video description",
    default="Test Description")
  argparser.add_argument("--category", default="22",
    help="Numeric video category. " +
      "See https://developers.google.com/youtube/v3/docs/videoCategories/list")
  argparser.add_argument("--keywords", help="Video keywords, comma separated",
    default="")
  argparser.add_argument("--privacyStatus", choices=VALID_PRIVACY_STATUSES,
    default=VALID_PRIVACY_STATUSES[0], help="Video privacy status.")
  args = argparser.parse_args()

  if not os.path.exists(args.file):
    exit("Please specify a valid file using the --file= parameter.")

  youtube = get_authenticated_service(args)
  try:
    initialize_upload(youtube, args)
  except HttpError, e:
    print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
Use a "client_secrets.json" file.
Configure credentials to generate it here:
https://console.developers.google.com/apis/credentials
{
  "web": {
    "client_id": "xxxxxxxxxxxxxx",
    "project_id": "xxxxxxxxxxxxxx",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_secret": "xxxxxxxxxxxxxxxx",
    "redirect_uris": ["http://localhost:8090/", "http://localhost:8090/Callback"],
    "javascript_origins": ["http://localhost"]
  }
}
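With a "web" client like the one above, the server can drive the OAuth dance itself instead of relying on run_flow's console or local-browser prompts: generate an authorization URL, send the user's browser there, then exchange the code in the redirect handler. A rough sketch using google_auth_oauthlib's Flow; the Flask routes and session handling are illustrative, not part of the original sample:

import flask
import google_auth_oauthlib.flow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/youtube.upload"]
app = flask.Flask(__name__)
app.secret_key = "change-me"  # placeholder; required for the session

@app.route("/authorize")
def authorize():
    flow = google_auth_oauthlib.flow.Flow.from_client_secrets_file(
        "client_secrets.json", scopes=SCOPES)
    flow.redirect_uri = "http://localhost:8090/Callback"  # must match the JSON above
    auth_url, state = flow.authorization_url(
        access_type="offline", include_granted_scopes="true")
    flask.session["state"] = state
    # Instead of printing a URL on the console, redirect the user's browser to it.
    return flask.redirect(auth_url)

@app.route("/Callback")
def callback():
    flow = google_auth_oauthlib.flow.Flow.from_client_secrets_file(
        "client_secrets.json", scopes=SCOPES, state=flask.session["state"])
    flow.redirect_uri = "http://localhost:8090/Callback"
    # Google redirects back here with ?code=...; exchange it for credentials.
    flow.fetch_token(authorization_response=flask.request.url)
    credentials = flow.credentials  # persist these per user for later uploads
    youtube = build("youtube", "v3", credentials=credentials)
    # ... call youtube.videos().insert(...) with the user's file from here ...
    return "Authorized; the server can now upload to this user's channel."

Because the flow requests offline access, the stored credentials include a refresh token, which is what lets the server upload later without prompting that user again.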
A very useful step-by-step guide on how to get access and refresh tokens and save them for future use with the YouTube OAuth API v3: PHP server-side YouTube V3 OAuth API video upload guide.
https://www.domsammut.com/code/php-server-side-youtube-v3-oauth-api-video-upload-guide