I am new to programming and I am trying to practice using APIs from third-party websites to pull data into my own site. I am trying to use the Glassdoor API, but I don't know where to find the required values for userip and useragent. This is Glassdoor's explanation:
userip The IP address of the end user to whom the API results will be shown.
useragent The User-Agent (browser) of the end user to whom the API results will be shown. Note that you can obtain this from the "User-Agent" HTTP request header from the end user.
Note that for now I just want to pull some data onto a test website on my own computer and simply print the JSONP results onto the page. Where do I find these values? Thanks!
I was having the same trouble and think I can answer your question:
You can find your IP address by simply Googling "what is my IP address." Google will give you the answer right in the results, or you can use another site like http://www.whatismyip.com/
Your user agent string identifies which browser is being used, what version, and on which operating system. You can Google this as well, or open your browser console (right-click in the browser window → Inspect → Console) and type navigator.userAgent, which will print the string you need.
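Note that Glassdoor's description says these values should describe the end user, so on a real site you would read them from the incoming request rather than hard-coding your own. A minimal sketch of that idea, assuming a Python/Flask server (the endpoint and parameter names follow Glassdoor's published examples; the partner ID and key are placeholders):

from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/employers")
def employers():
    # Capture the end user's IP and User-Agent from the incoming request
    userip = request.remote_addr
    useragent = request.headers.get("User-Agent", "")
    resp = requests.get(
        "http://api.glassdoor.com/api/api.htm",
        params={
            "v": "1",
            "format": "json",
            "action": "employers",
            "t.p": "YOUR_PARTNER_ID",  # placeholder
            "t.k": "YOUR_KEY",         # placeholder
            "userip": userip,
            "useragent": useragent,
        },
    )
    return resp.json()

For a local test page, though, your own IP and browser string found as above are fine.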
Hope this helps!
You can put your IP address lookup in your code rather than googling it. For example, in R:
library(RCurl)
library(rjson)

# ipinfo.io returns JSON describing the caller, including its public IP
str_info <- getURL('http://ipinfo.io')
data <- fromJSON(str_info)

# Build the query-string fragment Glassdoor expects, e.g. "userip=203.0.113.7"
userip <- paste("userip=", data$ip, sep = "")
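The same lookup works in Python, for comparison (a minimal sketch using the requests package; ipinfo.io's /json endpoint returns the caller's public IP):

import requests

# Ask ipinfo.io for our public-facing IP address
data = requests.get("https://ipinfo.io/json").json()
userip = "userip=" + data["ip"]
print(userip)  # e.g. userip=203.0.113.7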
When I search something on google.com, I see interaction with the following IP address: 172.217.7.132
But when I do a reverse lookup on the IP address, I get iad30s08-in-f132.1e100.net. and iad30s08-in-f4.1e100.net., not google.com.
What do I need to do to correctly identify that this IP address belongs to google.com?
EDIT
Clarifying the question: my problem is not specific to google.com. I want to programmatically/logically arrive at google.com, because that is what my browser requested.
The same problem exists with Amazon: a reverse DNS lookup on the IP address amazon.com resolves to gives me server-13-32-167-140.sea19.r.cloudfront.net. instead of amazon.com.
Code for performing reverse lookup:
from dns import reversename, resolver

def reverse_lookup(ip_address):
    # Build the reverse-DNS name, e.g. 132.7.217.172.in-addr.arpa
    domain_address = reversename.from_address(ip_address)
    # Query the PTR record(s) and return them as text
    return [answer.to_text() for answer in resolver.query(domain_address, "PTR")]

# Example: reverse_lookup("172.217.7.132") -> ["iad30s08-in-f132.1e100.net."]
As others have mentioned, 1e100.net does in fact belong to Google. Their reverse DNS is going to resolve to whatever they want it to resolve to, and there's not much you can do about that.
Depending on your requirements, another alternative may be using a geolocation database to gather more information about an IP. You can find a demo of this here:
https://www.maxmind.com/en/geoip-demo
(enter your example address 172.217.7.132 in the form)
MaxMind has various products (some free, some commercial), so one of them may fit your need to look up this info programmatically.
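For instance, a minimal sketch using MaxMind's official geoip2 Python package with a downloaded GeoLite2 database file (the database path is an assumption; point it at wherever you saved the file):

import geoip2.database

# Open the local GeoLite2 database (downloaded from MaxMind beforehand)
with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    response = reader.city("172.217.7.132")
    print(response.country.name)  # e.g. United States
    print(response.city.name)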
A different possible solution would be to get access to a WHOIS API, such as:
https://hexillion.com/whois
Example results:
https://hexillion.com/samples/WhoisXML/?query=172.217.7.132&_accept=application%2Fvnd.hexillion.whois-v2%2Bjson
From Google's own FAQ (https://support.google.com/faqs/answer/174717):
1e100.net is a Google-owned domain name used to identify the servers in our network.
Following standard industry practice, we make sure each IP address has a corresponding hostname. In October 2009, we started using a single domain name to identify our servers across all Google products, rather than use different product domains such as youtube.com, blogger.com, and google.com.
Typically, you will get a 1e100.net result when you do a reverse lookup on one of their IPs. Consider it as good as a google.com result would be - you've verified that the IP is controlled by Google if you see it.
One exception to this is the Googlebot crawler, which may return google.com or googlebot.com results. (I would expect this to eventually get moved over to 1e100.net in the future.)
I am trying to access the Open edX Data Analytics API v0 alpha, as I would like to download the problem grades data.
In the documentation on setting up the API, it says to test the Data Analytics API by doing the following: "In a browser, go to: http://<server-name>:<port>/docs/#!/api/. Enter a valid key and click Explore."
May I know what the server-name and port number refer to here?
Also, what does the /docs/#!/ part of the URL refer to?
I tried to look for the API URL online, but could not find it either.
I am also assuming I need to get authorization through OAuth2.
As this is the first time I am trying to access an API to download data, I would really appreciate your help with the questions above.
In this:
http://<server-name>:<port>/docs/#!/api/
<server-name> refers to the hostname or IP address of the server where you are running edx-analytics, and <port> refers to the port that service listens on.
A sample URL could be http://192.168.10.110:8085/docs/#!/api/
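Once you have a valid key, a request against the API looks something like this minimal Python sketch. The host and port come from the sample URL above; the endpoint path and the token-style Authorization header are assumptions, so check the /docs/#!/api/ page on your own instance for the real routes:

import requests

BASE_URL = "http://192.168.10.110:8085/api/v0"  # host/port from the sample above
API_KEY = "your-api-key"                        # placeholder

# Hypothetical endpoint path; browse /docs/#!/api/ on your server for real ones
resp = requests.get(
    BASE_URL + "/problems/",
    headers={"Authorization": "Token " + API_KEY},
)
resp.raise_for_status()
print(resp.json())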
I have a vb.NET App that uses System.Net.WebClient to query an API. I'm able to get the information I'm requesting just fine.
The people that supply the API are requesting that I
"set a custom User header when requesting data to determine the source application."
Am I supposed to pre-send something first, or append something to the URL for the WebClient to process? The API only accepts GET requests, and it doesn't have a parameter for identification.
I'm stuck on the terminology here. A search for that phrase here turned up server-side topics, so I don't know what to look for. Can someone translate?
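"Set a custom header" just means adding an HTTP request header that travels with the GET itself; nothing is pre-sent and nothing is appended to the URL. A minimal sketch of the idea, in Python for brevity (the header name X-Source-App and its value are assumptions; use whatever name the provider asked for). In VB.NET, WebClient.Headers.Add does the same job before you call DownloadString:

import requests

resp = requests.get(
    "https://api.example.com/data",          # placeholder URL
    headers={"X-Source-App": "MyApp/1.0"},   # custom header identifying your app
)
print(resp.text)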
I have a product code which I need to enter into 6 different websites in order to pull different information about the product from each of them. Is there a way to save this product code in some sort of variable, pass it into each website's input box, and have it return all the information from each one automatically? I really have no idea where to start with this, so if anyone can brainstorm a few ideas to get me moving, that would be great.
To get what you are planning:
You need a script which visits the specified website; at the website, you can get the input element by tag. For instance, in JavaScript:
var textBox = document.getElementsByTagName('input')[0]; // first <input> on the page
This gives you a reference to the text field. You can then enter the text like this:
textBox.value = "any string";
Once you have done this, you will have to retrieve the results from the page, which depends on each website's layout. If you can describe your task in more detail, you will get a better response.
Assuming you're talking about using an ordinary GUI browser, the best you can do is copy the code to your system clipboard and paste it into each page in the browser.
If you're talking about programmatic web access, like wget or curl, it depends on what language you are writing your script in.
You have to create a web request for each website and find a way to parse the response, which will be HTML.
Have a look at HttpWebRequest; you can find lots of examples on the internet that show how to create an HTTP POST to a website:
http://www.terminally-incoherent.com/blog/2008/05/05/send-a-https-post-request-with-c/
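If you go the programmatic route, the pattern is the same in any language: send the product code as the form field each site expects, then parse the HTML that comes back. A minimal Python sketch, where the URLs and the form-field name "product_code" are placeholders (inspect each site's search form for the real values):

import requests

PRODUCT_CODE = "ABC-12345"
SITES = [
    "https://site1.example.com/search",  # placeholder
    "https://site2.example.com/lookup",  # placeholder
]

for url in SITES:
    # Submit the same code to each site's search form
    resp = requests.post(url, data={"product_code": PRODUCT_CODE})
    print(url, resp.status_code)
    # Parse resp.text (HTML) per site, e.g. with BeautifulSoup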
Does anyone know the HTTP request for finding the subscriber count of a subscription?
Something similar to http://www.google.com/reader/api/0/unread-count?all=true must exist for subscribers, no?
This is what Reader uses to fill in the data in the "show details" UI for a particular feed:
http://www.google.com/reader/api/0/stream/details?s=feed/https://blog.stackoverflow.com/feed/&output=json&fetchTrends=false
Specifically, you'd want the "subscribers" property in the JSON output. The general format for the "s" parameter is feed/<feed URL>, as in the example above. Note that the endpoint requires authentication.
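For example, a minimal Python sketch of that request (the Authorization header is a placeholder; the endpoint requires an authenticated Reader session, historically a ClientLogin token):

import requests

FEED_URL = "https://blog.stackoverflow.com/feed/"
resp = requests.get(
    "http://www.google.com/reader/api/0/stream/details",
    params={"s": "feed/" + FEED_URL, "output": "json", "fetchTrends": "false"},
    headers={"Authorization": "GoogleLogin auth=YOUR_TOKEN"},  # placeholder
)
print(resp.json().get("subscribers"))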
I wanted to do this same thing, but from a public location, so I hacked together an API for everyone's use... it returns JSON or JSONP, and I don't save any history of the feeds that are looked up, but I do have logging so I can ban abusive IPs.