I have searched GitHub to access the log details of pull requests using the following search: repo:organisation/repository
But I couldn't get the pull requests of an individual user.
How do I access the log of pull requests from all users in a GitHub repository?
If you want to see contributors to a repository, click the "contributors" link near the top of the page.
If you want to see who has opened pull requests, click the Pull requests tab. To see all pull requests, go to the filter search bar, remove the text "is:open", and hit Enter. This will show you all pull requests.
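For a single user's pull requests, a filter such as is:pr author:theusername in that same search bar should narrow the list (author: is a standard GitHub search qualifier; the username here is a placeholder).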
I am not sure that the exact functionality you are looking for exists.
You can use the GitHub API: List Pull Requests
GET /repos/:owner/:repo/pulls
See this Perl script for instance, which lists the PRs of a repo.
my $urlPrefix = "https://api.github.com/repos/$owner/$repo";
my @prs = `curl -s $urlPrefix/pulls | grep '"number"'`;
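Note that by default the pulls endpoint only returns open pull requests. As a rough sketch of the same listing in Python (requests), with the owner and repo names as placeholders:

# Sketch: list pull requests (open and closed) for a repository.
# OWNER and REPO are placeholders; unauthenticated calls are rate-limited.
import requests

OWNER, REPO = "organisation", "repository"
url = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls"

# state=all returns open and closed PRs; the API default is open only.
resp = requests.get(url, params={"state": "all", "per_page": 100})
resp.raise_for_status()

for pr in resp.json():
    # pr["user"]["login"] identifies the author, so you can group PRs per user.
    print(pr["number"], pr["user"]["login"], pr["title"])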
I've made an application that creates pull requests to update the dependencies in all of my org's repos when the repo "Alpha" gets a new tag. The process is triggered by our CI flow on Alpha. Other engineers here would like to upgrade this application so that whoever made the tag is also automatically added as a requested reviewer on all of the associated pull requests. I do not see any way to do this with the GitHub REST API. So far I have:
GET tag by name -> tag object sha
GET tag (with obj sha) -> tagger name & tagger email
*************GAP**************
POST requested reviewer (with username) -> completed!
I can't see any good way to get a username from the REST API with the name and/or email. I could query commits from Alpha and filter them, BUT "person who tagged" != "person who made the last commit", AND I know that at least one of our more prolific taggers is sometimes logged in from different emails (web vs CLI vs home machine, etc.), so the app might miss them from time to time.
I think it may be possible to get what I want via the GraphQL API, but I'd really like to exhaust the REST possibilities before I go down that road. Please shoot any ideas my way!
After gathering more information, it looks like it's possible, and even slightly more elegant than I anticipated. If I have the name of the tag (the 'ref'), I can get a specific commit with that rather than with the SHA. The response for this commit includes author information that gives the login. I can then use this along with the pull number to request a reviewer.
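A rough sketch of that flow with Python's requests library; the owner, repo, tag name, PR number, and token here are all placeholders:

# Sketch of the flow described above: tag ref -> commit -> author login -> reviewer.
import requests

OWNER, REPO, TAG_NAME, PR_NUMBER = "org", "Alpha", "v1.2.3", 42  # placeholders
headers = {"Authorization": "token YOUR_TOKEN"}

# GET the commit by ref (the tag name); the response includes the linked
# GitHub account of the author, not just a name/email pair.
commit = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{TAG_NAME}",
    headers=headers,
).json()
login = commit["author"]["login"]  # "author" can be null if the email is unlinked

# POST that login as a requested reviewer on the pull request.
requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/requested_reviewers",
    headers=headers,
    json={"reviewers": [login]},
)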
If I execute a BigQuery job using the REST API (i.e. bigquery.googleapis.com) in the response I get back a selfLink that looks something like this:
https://bigquery.googleapis.com/bigquery/v2/projects/my-project/jobs/job_0123456789ABCDEF?location=EU
In the UI (i.e. console.cloud.google.com) I can see the very same job in the project's query history.
Is it possible to use the information within that API response and construct a URL that will allow a person to visit that URL in the browser and be taken directly to the information about that query in the UI? This would be really useful because we could log a message containing that URL so that anyone viewing the logs can see a user-friendly UI regarding that job.
I suspect the answer is "no" but just thought I'd ask.
I believe you can share this link:
https://console.cloud.google.com/bigquery?project=<my-project>&j=bq:<location>:<job_id>&page=queryresults
For example: https://console.cloud.google.com/bigquery?project=my-project&j=bq:US:2846160a-9a13-4192-9bff-e691ff2adab6&page=queryresults
If a user has the BigQuery Job List permission in that project, then when they open the link they will be able to see the query that was run in the UI, along with the job information.
But they can't see the query results, which is intended behavior. Instead they will get a warning:
Access Denied: User does not have permission to access results of another user's job.
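For anyone wanting to automate this, a small sketch in Python that pulls the pieces out of the selfLink and assembles the console URL above, assuming the selfLink always has the shape shown in the question:

# Sketch: build the console URL from a BigQuery job selfLink.
from urllib.parse import urlparse, parse_qs

self_link = ("https://bigquery.googleapis.com/bigquery/v2/"
             "projects/my-project/jobs/job_0123456789ABCDEF?location=EU")

parsed = urlparse(self_link)
# path looks like: /bigquery/v2/projects/<project>/jobs/<job_id>
parts = parsed.path.split("/")
project, job_id = parts[4], parts[6]
location = parse_qs(parsed.query)["location"][0]

console_url = (f"https://console.cloud.google.com/bigquery"
               f"?project={project}&j=bq:{location}:{job_id}&page=queryresults")
print(console_url)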
I'm trying to get Excel to download a CSV file, from a link that changes by the day, with a click on a button. The thing is, it's locked behind an agreement-number, ID and password.
I, however, got two API tokens:
TheAppSecretToken
TheAgreementGrantToken
The link is:
https://secure.e-conomic.com/secure/generelt/exportdata2.asp?mode=doexport&kartotek=5&fradato=01-01-2017&tildato=01-02-2018&vcseparator=%3B&vcQualifier=%22
If people have another way than using VBA code to download this file with a click on a button, don't hold back with the suggestion.
I appreciate any help I can get, thank you. :-)
EDIT: It's not a duplicate of another question, as this uses tokens and/or three pieces of login information.
EDIT2: Never mind that the link changes from day to day; I figured out that I can just put the date as far out in the future as I like.
Edit:
I assume that the login call needs to be a POST request, but this is only a guess as I can't test it with the information you have given.
Just change the loginBody with the information you need, or change it to the format that the URL needs (like JSON), and you can send as much information as needed.
If needed, you can also set more headers for any other tokens you have.
URL = "YOURURL"
loginBody = "username=username&password=password&token=token"
HttpObj.Open("POST", URL)
HttpObj.SetRequestHeader("Content-Type","application/x-www-form-urlencoded")
HttpObj.Send(loginBody)
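After Send returns, the response is available through HttpObj.Status and HttpObj.responseText (or HttpObj.responseBody for binary content such as a CSV), so you can check for a 200 before writing anything to disk.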
Old answer:
As I'm not allowed to comment: it seems what you are trying to do is explained here:
How do i download a file using VBA (Without internet explorer)
I'm working on a .NET project for which I need to detect and parse changes made to a specific single text file in a repository between different pull requests.
I've been able to access the pull requests and the commits using the GitHub API, but I don't know how to retrieve the lines that changed in the last commit.
Is this possible using the API? What would be the best approach?
If not should I try to read the last two file versions and implement a differ algorithm locally?
Thanks!
A pull request contains a diff_url entry like
"diff_url": "https://github.com/octocat/Hello-World/pull/1347.diff"
You can do this for any commit. For example, to get the diff of commit 7fd1a60b01f91b3 on octocat's Hello-World it's https://github.com/octocat/Hello-World/commit/7fd1a60b01f91b314f59955a4e4d4e80d8edf11d.diff.
This also works for branches. Here's master on octocat's Hello-World: https://github.com/octocat/Hello-World/commit/master.diff
The general form is:
https://github.com/<owner>/<repo>/commit/<commit>.diff
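Fetching one of these in a script is then just an HTTP GET; a minimal Python sketch using the public example above:

# Sketch: download the diff of a pull request (or commit/branch) as text.
import requests

diff = requests.get("https://github.com/octocat/Hello-World/pull/1347.diff")
diff.raise_for_status()
print(diff.text)  # a unified diff; '+' and '-' lines are the changed lines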
For private repositories:
curl -H "Accept: application/vnd.github.v3.diff" https://<personal access token>:x-oauth-basic@api.github.com/repos/<org>/<repo>/pulls/<pull request>
Also works with the normal cURL -u parameter.
See: https://docs.github.com/en/rest/reference/pulls#get-a-pull-request
The crux is in the requested media type. You can pass the Accept header with value
application/vnd.github.diff
see the documentation. For full reference, a GET request with the above plus an Authorization header to https://api.github.com/repos/{orgName}/{repoName}/pulls/{prId} does the trick.
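A sketch of that request in Python (requests), with the org, repo, PR number, and token as placeholders:

# Sketch: request the diff media type for a pull request from the REST API.
import requests

resp = requests.get(
    "https://api.github.com/repos/orgName/repoName/pulls/1",  # placeholders
    headers={
        "Accept": "application/vnd.github.diff",
        "Authorization": "token YOUR_TOKEN",  # needed for private repositories
    },
)
resp.raise_for_status()
print(resp.text)  # the body is the unified diff itself, not JSON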
How does one write a script to download one's Google web history?
I know about
https://www.google.com/history/
https://www.google.com/history/lookup?hl=en&authuser=0&max=1326122791634447
feed:https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss
but they fail when called programmatically rather than through a browser.
I wrote up a blog post on how to download your entire Google Web History using a script I put together.
It all works directly within your web browser on the client side (i.e. no data is transmitted to a third party), and you can download it to a CSV file. You can view the source code here:
http://geeklad.com/tools/google-history/google-history.js
My blog post has a bookmarklet you can use to easily launch the script. It works by accessing the same feed, but iterates through the entire history 1000 records at a time, converting it into a CSV string and making the data downloadable at the touch of a button.
I ran it against my own history, and successfully downloaded over 130K records, which came out to around 30MB when exported to CSV.
EDIT: It seems that a number of folks who have used my script have run into problems, likely due to some oddities in their history data. Unfortunately, since the script does everything within the browser, I cannot debug it when it encounters histories that break it. If you're a JavaScript developer who has used my script and it appears your history has caused it to break, please feel free to help me fix it and send me any updates to the code.
I tried GeekLad's system; unfortunately, two breaking changes have occurred: #1, the URL has changed (I modified and hosted my own copy), which led to #2: the type=rss argument no longer works.
I only needed the timestamps... so began the best/worst hack I've written in a while.
Step 1 - https://stackoverflow.com/a/3177718/9908 - Using Chrome, disable ALL security protocols.
Step 2 - https://gist.github.com/devdave/22b578d562a0dc1a8303
Using contentscript.js and manifest.json, make a Chrome extension, and host ransack.js locally with whatever service you want (PHP, Ruby, Python, etc.). Go to https://history.google.com/history/ after installing your content-script extension in developer mode (unpacked). It will automatically inject ransack.js + jQuery into the DOM, harvest the data, and then move on to the next "Later" link.
Every 60 seconds or so, Google will randomly force you to re-login, so this is not a start-and-walk-away process, BUT it does work, and if they up the obfuscation ante, you can always resort to chaining Ajax calls and sending the page back to the backend for post-processing. At full tilt, my abomination of a script collected 1 page of data per second.
On moral grounds I will not help anyone modify this script to get search terms and results, as this process is not sanctioned by Google (though not blocked, apparently), and I recommend it only to sufficiently motivated individuals who can make it work for them. By my estimates it took me 3-4 hours to get all 9 years of data (90K records) at 1 page every 900ms or faster.
While this thing is going, DO NOT browse the rest of the web, because Chrome is running with no safeguards in place, and most of those safeguards exist for a reason.
You can download your search logs directly from Google (in case downloading them with a script is not the primary purpose).
Steps:
1) Log in and go to https://history.google.com/history/
2) Just below your profile picture, towards the right side, you can find an icon for settings. See the second option, called "Download". Click on that.
3) Then click on "Create Archive", and Google will mail you the log within minutes.
Maybe, before issuing a request to get the feed, the script should add a User-Agent HTTP header of a well-known browser, so that Google decides the request came from that browser.
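A sketch of that suggestion in Python (requests); whether Google still accepts it, and the authentication that would also be needed, are untested assumptions:

# Sketch: spoof a browser User-Agent when fetching the history feed.
# The URL is the one from the question; authentication cookies would
# still be needed, which this sketch does not handle.
import requests

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
resp = requests.get(
    "https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss",
    headers=headers,
)
print(resp.status_code)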