Hello friends, I just started using GitHub, and I want to know whether it is possible to download a GitHub repository to my local computer by using the GitHub API or an API library (e.g. the Python library "pygithub3" for the GitHub API).
Using github3.py you can clone all of your repositories (including forks and private repositories) by doing:
import github3
import subprocess

g = github3.login('username', 'password')
for repo in g.iter_repos(type='all'):
    subprocess.call(['git', 'clone', repo.clone_url])
If you're looking to clone an arbitrary repository you can do this:
import github3
import subprocess

r = github3.repository('owner', 'repository_name')
subprocess.call(['git', 'clone', r.clone_url])
pygithub3 has not been actively developed in over a year. I would advise against using it, since it is unmaintained and missing a large number of the additions GitHub has made to its API since then.
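If you would rather avoid a wrapper library entirely, the current GitHub REST API can list a user's repositories directly; here is a minimal sketch using requests (the username "octocat" is a placeholder):
import subprocess
import requests

# List a user's public repos via the GitHub REST API and clone each one.
# "octocat" is a placeholder username; private repos would need a token.
resp = requests.get("https://api.github.com/users/octocat/repos")
resp.raise_for_status()
for repo in resp.json():
    subprocess.call(["git", "clone", repo["clone_url"]])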
As illustrated in this Gist, the simplest solution is just to call git clone.
#!/usr/bin/env python
# Script to clone all the github repos that a user is watching

import requests
import json
import subprocess

# Grab all the URLs of the watched repos
user = 'jharjono'
r = requests.get("http://github.com/api/users/%s/subscriptions" % (user))
repos = json.loads(r.content)
urls = [repo['url'] for repo in repos['repositories']]

# Clone them all
for url in urls:
    cmd = 'git clone ' + url
    pipe = subprocess.Popen(cmd, shell=True)
    pipe.wait()

print("Finished cloning %d watched repos!" % (len(urls)))
This gist, which uses pygithub3, gathers the clone URLs of the repos it finds (excluding forks) and prints them:
#!/usr/bin/env python

import pygithub3

gh = None

def gather_clone_urls(organization, no_forks=True):
    all_repos = gh.repos.list(user=organization).all()
    for repo in all_repos:
        # Don't yield the urls for repos that are forks.
        if no_forks and repo.fork:
            continue
        yield repo.clone_url

if __name__ == '__main__':
    gh = pygithub3.Github()
    clone_urls = gather_clone_urls("gittip")
    for url in clone_urls:
        print(url)
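If you want the script to actually clone rather than just print, the final loop can shell out to git as in the earlier answers; a variant of the gist's __main__ block, under the same assumptions:
import subprocess

if __name__ == '__main__':
    gh = pygithub3.Github()
    for url in gather_clone_urls("gittip"):
        subprocess.call(['git', 'clone', url])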
For some reason, the subtitles become the chapters in the PDF generated for my docs by ReadTheDocs.
Check it out here: https://yarsaw.namantech.me/_/downloads/en/latest/pdf/
Here's the code for the index file:
######################
**Welcome to YARSAW!**
######################
YARSAW is an open source, free and easy to use API Wrapper for the `Random Stuff API`_.
***************
Overview
***************
Features:
* Wraps all of the `Random Stuff API <https://api-info.pgamerx.com>`_
* Async-ready
* Easy to use
* Saves you a lot of time
*****************
Installation
*****************
To install the latest stable version of YARSAW, run the following command:

.. code-block:: bash

   python -m pip install yarsaw

To install a specific version of YARSAW, run the following command:

.. code-block:: bash

   python -m pip install yarsaw==<version>

To install the beta version of YARSAW, run the following command:

.. code-block:: bash

   python -m pip install git+https://github.com/BruceCodesGithub/yarsaw --upgrade
****************
Getting Started
****************
Get your API Keys
==================
1. Register to get an API Key at the `Random Stuff API registration page <https://api-docs.pgamerx.com/Getting%20Started/register/>`_. This is used for authentication.
2. Register at `RapidAPI <https://rapidapi.com/pgamerxdev/api/random-stuff-api>`_ for a RapidAPI Key and Account, and subscribe to the Random Stuff API. This is used to make requests to the Random Stuff API and keep track of them. You can go to `The RapidAPI Developer Dashboard <https://rapidapi.com/developer/apps>`_ after logging in, select an application, head over to security, and copy its key. This is your RapidAPI Key.
Examples
========
.. code-block:: python

   import yarsaw
   import asyncio # builtin, used for asynchronous calls

   client = yarsaw.Client("your_api_key", "your_rapidapi_key")

   async def joke():
       joke = await client.get_joke() # get the joke in form of a dict
       formatted_joke = yarsaw.Utils().format_joke(joke) # format the joke (optional)
       print(formatted_joke) # print the joke

   asyncio.get_event_loop().run_until_complete(joke()) # run the joke() function
Now just start reading the documentation!
****************
Contents
****************
.. tip::

   The :doc:`client` page contains all of the methods you can use to interact with the Random Stuff API, so we recommend reading that first.
.. toctree::
   :maxdepth: 1
   :caption: Documentation

   client
   utils

.. toctree::
   :maxdepth: 1
   :caption: Other Pages and Resources

   faq
   changelog
Documentation Last Updated on |today|
I want "Welcome to YARSAW" to be the first chapter and the next file to be the next chapter. Instead, the subtitles become the chapters, and all the other content becomes subtitles.
I followed the style conventions from https://readthedocs.org/projects/documentation-style-guide-sphinx/downloads/pdf/latest/.
It works fine on the docs page itself (https://yarsaw.namantech.me).
Changing the heading underline characters does not help.
Full code: https://github.com/BruceCodesGithub/yarsaw
[SOLVED]
The problem was that the toctree was below the headers, which caused issues. Moving it to the top of the file solves it.
I need to read the .git folder that has the git remote URL below from within my Groovy script, using the Grgit API.
url = git@github.com:***/****.git
Can you please help?
I was able to do it as below:
import org.ajoberstar.grgit.Grgit

def gitRoot = project.hasProperty('git.root') ? project.property('git.root') : project.rootProject.projectDir
def git = Grgit.open(dir: gitRoot)
println("git.remote.list().url --> " + git.remote.list().url)
I already have pickle files worth 300-400 MB in my Google Drive's Colab folder.
I want to read and use them in Google Colab, but I am unable to.
I tried:
from google.colab import files

uploaded = files.upload()
#print(uploaded)
for name, data in uploaded.items():
    with open(name, 'wb') as f:
        #f.write(data)
        print('saved file', name)
But it prompts me to upload a file.
I already gave access to drive using:
from google.colab import auth
auth.authenticate_user()
Do I need to give access permission again?
Why does it show only datalab in the folder?
!ls
> datalab
Do I need to download the file again to the Google Colab notebook?
You will need to use Python and change the current directory. For example,
import os
os.chdir('datalab')
will take you inside the datalab folder. If you run !ls now, you will see the contents of the datalab folder. Then, you can change directories again as often as you want.
I find it easiest to mount your Google Drive locally.
from google.colab import drive
drive.mount('/content/gdrive')
!ls # will show you can now access the gdrive locally
This mounts your Google Drive to the notebook so you can access documents in your Google Drive as if they were local. To access the "Colab Notebooks" part of your Google Drive, use the following path:
GDRIVE_DIR = "gdrive/My Drive/Colab Notebooks/"
If you have your pickle files in the Colab Notebooks folder, you can load one with:
import os
import pickle

filename = ... # The name of the pickle file in your Google Drive
with open(os.path.join(GDRIVE_DIR, filename), 'rb') as f:
    data = pickle.load(f)
A tutorial on mounting your Google Drive and other methods can be found here
You can use pydrive for that. First, you need to find the ID of your file.
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once per notebook.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Download a file based on its file ID.
#
# A file ID looks like: laggVyWshwcyP6kEI-y_W3P8D26sz
listed = drive.ListFile({'q': "title contains '.pkl' and 'root' in parents"}).GetList()
for file in listed:
    print('title {}, id {}'.format(file['title'], file['id']))
You can then load the file using the following code:
from googleapiclient.discovery import build
drive_service = build('drive', 'v3')
import io
import pickle
from googleapiclient.http import MediaIoBaseDownload
file_id = 'laggVyWshwcyP6kEI-y_W3P8D26sz'
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
    # _ is a placeholder for a progress object that we ignore.
    # (Our file is small, so we skip reporting progress.)
    _, done = downloader.next_chunk()
downloaded.seek(0)
f = pickle.load(downloaded)
I was trying to download a file from my Google Drive to Colaboratory.
file_id = '1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz'
import io
from googleapiclient.http import MediaIoBaseDownload
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
    # _ is a placeholder for a progress object that we ignore.
    # (Our file is small, so we skip reporting progress.)
    _, done = downloader.next_chunk()
downloaded.seek(0)
print('Downloaded file contents are: {}'.format(downloaded.read()))
Doing so, I am getting this error:
NameError: name 'drive_service' is not defined
How do I fix this error?
No need to install or import any library. Just put your file ID at the end:
!gdown yourFileIdHere
Note: at the time of writing, the gdown library is preinstalled on Colab.
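gdown also has a Python API if you prefer calling it from code rather than through a shell command; a minimal sketch, assuming a recent gdown version that accepts the id keyword (the file ID and output name are placeholders):
import gdown

# Placeholder ID and output file name; replace with your own.
gdown.download(id="yourFileIdHere", output="myfile.bin", quiet=False)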
The easiest method to download a file from Google Drive to a Colab notebook is via the Colab API:
from google.colab import drive
drive.mount('/content/gdrive')
!cp '/content/gdrive/My Drive/<file_path_on_google_drive>' <filename_in_colab>
Remarks:
With drive.mount(), you can access any file on your Google Drive.
'My Drive' is equivalent to 'Google Drive' on your local file system.
The file path is surrounded with single quotes because the standard directory below the mount point ('My Drive') contains a space, and you might also have spaces elsewhere in your path anyway.
The file browser (opened by clicking the arrow on the left) is very useful for locating your file and getting its path: it lets you click through your mounted folder structure and copy the file path.
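If you prefer plain Python over the !cp shell magic, the same copy can be done with shutil once the drive is mounted (the source path is the same placeholder as above):
import shutil
from google.colab import drive

drive.mount('/content/gdrive')
# '<file_path_on_google_drive>' is a placeholder, as in the !cp example.
shutil.copy('/content/gdrive/My Drive/<file_path_on_google_drive>', 'local_copy')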
Here's an easy way to get by: you can use either the wget command or the requests module in Python to get the job done.
# find the share link of the file/folder on Google Drive
file_share_link = "https://drive.google.com/open?id=0B_URf9ZWjAW7SC11Xzc4R2d0N2c"
# extract the ID of the file
file_id = file_share_link[file_share_link.find("=") + 1:]
# append the id to this REST command
file_download_link = "https://docs.google.com/uc?export=download&id=" + file_id
The string in file_download_link can be pasted in the browser address bar to get the download dialog box directly.
If you use the wget command:
!wget -O ebook.pdf --no-check-certificate "$file_download_link"
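The requests variant mentioned above would look like this, reusing file_download_link from the snippet and streaming the response to disk (the output name mirrors the wget example):
import requests

response = requests.get(file_download_link, stream=True)
response.raise_for_status()
# Write the download to disk in chunks to avoid holding it all in memory.
with open('ebook.pdf', 'wb') as f:
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)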
Step 1
!pip install -U -q PyDrive
Step 2
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
Step 3
file_id = '17Cp4ZxCYGzWNypZo1WPiIz20x06xgPAt' # URL id.
downloaded = drive.CreateFile({'id': file_id})
downloaded.GetContentFile('shaurya.txt')
Step 4
!ls #to verify content
OR
import os
print(os.listdir())
You need to define a drive API service client to interact with the Google drive API, for instance:
from googleapiclient.discovery import build
drive_service = build('drive', 'v3')
(see the notebook External data: Drive, Sheets, and Cloud Storage/Drive REST API)
I recommend you use PyDrive to download your file from Google Drive. I downloaded a 500MB dataset in 5s.
1. Install PyDrive
!pip install PyDrive
2. OAuth
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
gauth = GoogleAuth()
gauth.LocalWebserverAuth()
drive = GoogleDrive(gauth)
3. Code to download a file from Google Drive
fileId = drive.CreateFile({'id': 'DRIVE_FILE_ID'}) # DRIVE_FILE_ID is the file id, example: 1iytA1n2z4go3uVCwE_vIKouTKyIDjEq
print(fileId['title']) # UMNIST.zip
fileId.GetContentFile('UMNIST.zip') # Save Drive file as a local file
The --id argument has been deprecated, so now you simply have to run:
! gdown 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz
If your file is stored in a variable you can run:
! gdown $my_file_id
You can also use my implementation based on google.colab and PyDrive at https://github.com/ruelj2/Google_drive, which makes it a lot easier.
!pip install -U -q PyDrive
import os
os.chdir('/content/')
!git clone https://github.com/ruelj2/Google_drive.git
from Google_drive.handle import Google_drive
Gd = Google_drive()
Gd.load_file(local_dir, file_ID)
You can simply mount your Google Drive (all files and folders) inside Google Colab and use them directly with this command:
# import drive
from google.colab import drive
drive.mount('/content/drive')
This will ask you for permission; after accepting, you will have all of your Google Drive available inside your Colab notebook.
If you want to use any file, just copy its path.
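For example, once the drive is mounted, you can open a pickle file directly by its copied path (the path below is a placeholder):
import pickle

# Placeholder path; copy the real one from the Colab file browser.
with open('/content/drive/My Drive/Colab Notebooks/data.pkl', 'rb') as f:
    data = pickle.load(f)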
I have several versions of the wkhtmltopdf library installed on my server. I want to be able to switch between them programmatically when I'm about to render, because we have several development teams and they use different versions of wkhtmltopdf. Different wkhtmltopdf versions give totally different rendered results, which is weird. Is it possible to switch between them programmatically?
This is not complete code, but I tried this kind of code; maybe it will work for you:
import os
import subprocess
import logging

from openerp import tools # this odoo config file master/openerp/tools/which.py

_logger = logging.getLogger(__name__)

def find_in_path(name):
    path = os.environ.get('PATH', os.defpath).split(os.pathsep)
    if tools.config.get('bin_path') and tools.config['bin_path'] != 'None':
        path.append(tools.config['bin_path'])
    return tools.which(name, path=os.pathsep.join(path))

def _get_wkhtmltopdf_bin():
    return find_in_path('wkhtmltopdf')

wkhtmltopdf_state = 'install'
try:
    process = subprocess.Popen(
        [_get_wkhtmltopdf_bin(), '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE
    )
    print("wkhtmltopdf version:", process.communicate()[0])
    # here write your logic
except (OSError, IOError):
    _logger.info('You need Wkhtmltopdf to print a pdf version of the reports.')
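To actually switch between several installed versions, one approach is to keep each binary at a known location and select the path at render time; a minimal sketch, with hypothetical install paths that you would adjust to your server:
import subprocess

# Hypothetical install locations; adjust to the real paths on your server.
WKHTMLTOPDF_VERSIONS = {
    '0.12.4': '/opt/wkhtmltopdf-0.12.4/bin/wkhtmltopdf',
    '0.12.5': '/opt/wkhtmltopdf-0.12.5/bin/wkhtmltopdf',
}

def render_pdf(version, html_path, pdf_path):
    # Render html_path to pdf_path with the chosen wkhtmltopdf binary.
    binary = WKHTMLTOPDF_VERSIONS[version]
    subprocess.check_call([binary, html_path, pdf_path])

render_pdf('0.12.5', 'report.html', 'report.pdf')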