I was handed a D9 site. I have not touched Drupal since D7. Anyway, we have a client that is sending data to an endpoint on our side. Here are the comments/notes that I got:
MarktCom populates some hidden vars on their site and sends them to the endpoint like this /Casingo/api/post
Casingo gets those values and contacts MarktCom's api with the clientID.
MarktCom then returns the member data
I realize this is probably a stupid question, but how do I find/get/see the data that MarktCom sent? I have gone through the custom module. I see the endpoint URL (above), and I see the API data path and API domain, but I don't see the data when I dig into the directory (I'm assuming it's probably a .json file). I can see the resulting .csv files that are being created from the data, but not the actual data.
A lot has happened due to the issues of the past couple of years, so getting hold of anyone on their side who knows what's going on has been hard/impossible. The overarching issue is that a couple of months ago the data in the .csv files started duplicating, so I want to see what we are actually receiving so I can determine where to start fixing it.
Any ideas on how I can investigate this would be much appreciated.
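One way to see exactly what MarktCom is sending, without waiting on anyone from their side: temporarily log the raw request body inside the custom module's controller for /Casingo/api/post, then read it back under Reports > Recent log messages (or with drush watchdog:show). The data most likely never lands on disk as a .json file; it only exists in the request and then in the generated .csv. A minimal sketch, assuming the route points at a controller roughly like the one below (class and method names here are placeholders, not your module's actual code):

<?php

use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;

class CasingoPostController {

  public function handle(Request $request): JsonResponse {
    // Capture exactly what MarktCom posts, before any processing touches it.
    $raw = $request->getContent();
    \Drupal::logger('casingo')->notice('Incoming payload: @payload', ['@payload' => $raw]);

    // ... the module's existing processing (the part that builds the .csv) continues here ...
    return new JsonResponse(['status' => 'received']);
  }

}

Comparing a few logged payloads against the duplicated .csv rows should tell you whether MarktCom is sending the data twice or the duplication happens on your side.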
I integrated Zapier into my application. At first, everything was working. I was able to create some Zaps and receive some information. After a few days, I created a new Zap, but this time it didn't work. I got in touch with Zapier's support and they informed me that the problem is with the ID I was providing them. It contains some extra characters.
`\ufeff{"id":"23"}`
Does anyone know what I can do to solve this problem? When I test with Postman, I don't see any extra characters.
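That \ufeff is a UTF-8 byte order mark (BOM) at the very start of your JSON body. It is invisible in most viewers, which is why Postman shows nothing, while Zapier still sees the extra bytes. It usually comes from a source file or template saved as "UTF-8 with BOM", so the cleanest fix is to re-save those files without a BOM; as a stopgap you can strip it from the body before sending. A minimal sketch in PHP (your stack may differ, but the idea is the same anywhere; $jsonBody stands in for whatever your endpoint currently returns):

<?php

// Drop a leading UTF-8 BOM (the bytes EF BB BF) from the response body, if present.
function stripBom(string $body): string {
  return substr($body, 0, 3) === "\xEF\xBB\xBF" ? substr($body, 3) : $body;
}

header('Content-Type: application/json; charset=utf-8');
echo stripBom($jsonBody);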
Does anybody know where the documentation for the Metacritic API is, or whether it still works? There used to be a Metacritic API at https://market.mashape.com/byroredux/metacritic-v2#get-user-details which disappeared today.
Otherwise I'm trying to scrape the site myself, but I keep getting blocked by a 429 Slow Down. I got data about 3 times this hour and haven't been able to get any more in the last 20 minutes, which is making testing difficult and the application possibly useless. Please let me know if there's anything else I could be doing to scrape that I don't know about.
I was using that API as well for an app I wrote a while ago. Looks like the creator removed it from Mashape. I just sent him an email to ask whether it'll be back up. I did find this scraper online. It only has a few endpoints but following the examples given you could easily add more. Let me know if you make any progress!
Edit: Looks like CBS requested it to be taken down. The ToS prohibits scraping:
[…] you agree not to do the following, or assist others to do the following:
Engage in unauthorized spidering, “scraping,” data mining or harvesting of Content, or use any other unauthorized automated means to gather data from or about the Services;
Though I was hoping for a JavaScript way of doing this, the creator of the API also gave me some info.
He says I was getting blocked for not having a User-Agent in the header, and that I should use a 429 handling procedure, i.e. re-request with progressively longer pauses in between.
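A rough sketch of that 429 handling in PHP with cURL (the User-Agent string, delays and retry count are just placeholders to illustrate the back-off idea):

<?php

// Fetch a URL, identifying ourselves with a User-Agent and backing off on 429s.
function fetchWithBackoff(string $url, int $maxRetries = 5): ?string {
  $delay = 5; // seconds; doubled after every 429 response
  for ($attempt = 0; $attempt < $maxRetries; $attempt++) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
      CURLOPT_RETURNTRANSFER => true,
      CURLOPT_USERAGENT      => 'Mozilla/5.0 (compatible; MetacriticFetcher/1.0)',
    ]);
    $body   = curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($status === 429) {   // told to slow down: wait longer and try again
      sleep($delay);
      $delay *= 2;
      continue;
    }
    return $status === 200 ? $body : null;
  }
  return null; // still rate-limited after all retries
}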
A PHP plugin is available as well: http://datalinx.io/shop/metacritic-api/
I had to add a user agent as JCDJulian said, and now it allows me to scrape. So for Ruby:
agent = Mechanize.new
agent.user_agent_alias = "Mac Firefox"
Then it stopped giving me the 403 Forbidden error.
I made a program that gets the data from the clipboard and saves it in a string variable. Then it looks for specific words in that string and generates several URLs. Afterwards it opens the browser and shows each URL in its own tab.
Some of my friends already use this program frequently and I want to have some statistics about how often. A simple counter variable would be enough, but I need to be able to access it.
I came up with two options that could work:
I could send an email to a specific address every time my app is executed. Then I could track the number of uses by manually or automatically counting the emails in the mailbox. I think this would be a very dirty solution.
I could create and publish a website containing a counter. This counter could be updated by my application. This solution is a bit better, I think, but a lot more work for just a single counter.
Do you have better ideas to solve my problem or is one of mine already a good one?
Thank you in advance!
You can use the Google Analytics Measurement Protocol. It gives you usage statistics for your application through Google Analytics: you can even see geo statistics, version distribution, and crash reports. It is easy to use from .NET; it just comes down to sending an HTTP request to Google.
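For reference, a single Measurement Protocol hit is just one HTTP POST; sketched here in PHP, but it is the same request from .NET or anything else (the tracking ID, client ID and event names below are placeholders):

<?php

// One "event" hit sent to the classic Google Analytics Measurement Protocol.
$params = http_build_query([
  'v'   => 1,                // protocol version
  'tid' => 'UA-XXXXXXXX-1',  // your Google Analytics tracking ID
  'cid' => '555',            // anonymous client/installation ID
  't'   => 'event',          // hit type
  'ec'  => 'app',            // event category
  'ea'  => 'launch',         // event action, counted once per program run
]);

file_get_contents('https://www.google-analytics.com/collect', false, stream_context_create([
  'http' => [
    'method'  => 'POST',
    'header'  => 'Content-Type: application/x-www-form-urlencoded',
    'content' => $params,
  ],
]));

Calling this once at program start gives you the per-launch counter you asked about, plus the geo and version breakdowns in the Google Analytics UI.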
Using the Contacts API v3 I had a working implementation for uploading a photo to an existing contact.
For a couple of weeks this has been failing with 404. The implementation had not been changed when the API servers started sending back 404s, and I don't see any indication of what exactly changed that would now result in the 404s.
I'm using HTTP PUT + the photo URL of the contact.
One interesting observation I made was that the contact's self-URL changes with each request (the provided details are still always the same and correct).
Did anyone notice something similar?
Edit: Link to issue: http://code.google.com/a/google.com/p/apps-api-issues/issues/detail?id=3301&q=contact&colspec=API%20ID%20Type%20Status%20Priority%20Stars%20Opened%20Summary
I tried different photo formats and sizes, different content types, and even photos which had been uploaded previously (when it was still working). Nothing changed the behaviour of returning 404.
W.r.t. the changing contact IDs: the contact ID changes between API invocations. I first thought the changing IDs could be related to the connection being reopened (no keep-alive). However, what speaks against this being the cause of the issue is that first retrieving a contact and then editing that contact's address works without any issues.
Authentication does not seem to be the problem either; otherwise editing a contact's address would not work.
PS: I'm using the JSON output format when retrieving the contact.
PS2: s/GET/PUT in step 3 (I tried changing PUT to GET to see if it still returns 404, which it does).
PS3: I am not using any client library but implementing the protocol directly (which should not be relevant for the HTTP PUT on the photo link).
After hours of investigation I found out that this is specifically an issue with OAuth1. Using OAuth2, the exact same photo links that had been returned when requesting a specific contact record via OAuth1 work and return the photo data on HTTP GET. I expect HTTP PUT on photo links using OAuth2 to succeed as well.
It remains open whether there's some kind of workaround for OAuth1.
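For anyone who wants to try the OAuth2 route without a client library, the request I would expect to succeed looks roughly like this (the photo link, token and image file are placeholders):

<?php

// HTTP PUT of the image bytes to the contact's photo link, authorised with an
// OAuth2 Bearer token instead of an OAuth1 signature.
$photoLink = 'https://www.google.com/m8/feeds/photos/media/default/CONTACT_ID'; // placeholder
$token     = 'ya29.EXAMPLE_ACCESS_TOKEN';                                        // placeholder

$ch = curl_init($photoLink);
curl_setopt_array($ch, [
  CURLOPT_CUSTOMREQUEST  => 'PUT',
  CURLOPT_RETURNTRANSFER => true,
  CURLOPT_HTTPHEADER     => [
    'Authorization: Bearer ' . $token,
    'Content-Type: image/jpeg',
    'GData-Version: 3.0',
  ],
  CURLOPT_POSTFIELDS     => file_get_contents('photo.jpg'),
]);
curl_exec($ch);
echo curl_getinfo($ch, CURLINFO_HTTP_CODE); // hoping for 200 instead of 404
curl_close($ch);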
I have a client who uploads his properties into third-party software created by a company called 'Estates IT', who then send that file, as a .blm, to Rightmove, who process it.
This client wants us to take that .blm file and output the data into a newly designed site we are building. Does anyone know of methods for, or have experience of, doing this or working with .blm files? It is a static file as far as I know.
I don't suppose this will be of help to you anymore, but if anyone else is looking for ways to handle the rightmove BLM file:
You can find several solutions implemented here
Using this repo, for example, you can just:
$blm = new \BLM\Reader(dirname(__FILE__) . '/test.blm');
var_dump($blm->toArray());
Most solutions I've seen are implemented in PHP though, and seem to be pretty old.
In case the website is a WordPress site, there seem to be a couple of plugins for this too (example).
I'm not too sure Rightmove still outputs a BLM in the latest version of its API.
This is a pretty simple process. We are currently working the other way, i.e. our system manages an estate agent's property listings and sends them to Rightmove for listing.
The Rightmove API definition can be found at www.rightmove.co.uk/adf.html
Each BLM comprises a definition section and a data section, which should make it quite straightforward to manage.
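If you end up parsing the file by hand rather than using one of the libraries mentioned above, the layout is: a #HEADER# section that declares the field (EOF) and row (EOR) delimiters, a #DEFINITION# row naming the columns, and a #DATA# block with one delimited row per property. A minimal PHP sketch, assuming the commonly used '^' field and '~' row delimiters (read the real ones from your file's header; the filename is a placeholder):

<?php

// Split a BLM file into associative arrays keyed by the column names
// from its #DEFINITION# row.
$raw = file_get_contents('listings.blm');

if (!preg_match('/#DEFINITION#\s*(.*?)\s*#DATA#\s*(.*?)\s*#END#/s', $raw, $m)) {
  exit('Unexpected BLM layout');
}

$fields = explode('^', rtrim(trim($m[1]), '^~'));

$records = [];
foreach (explode('~', $m[2]) as $row) {
  $row = rtrim(trim($row), '^');
  if ($row === '') {
    continue;
  }
  $values    = array_pad(explode('^', $row), count($fields), '');
  $records[] = array_combine($fields, array_slice($values, 0, count($fields)));
}

print_r($records[0] ?? []);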