hereNow not working as expected - API

I gather that they changed their API, according to this: https://groups.google.com/forum/?fromgroups#!topic/foursquare-api/sQMuHlv9wiU
It says I should be able to get the people nearby if I'm checked in to the specific place. However, it gives me back the same response before check-in and after check-in...

Edit: A couple of other devs have run into this issue, and in many cases the problem is that the lat/lng isn't being passed into the checkins/add request (the "ll" param). You're not included in hereNow unless you're deemed to be plausibly close to the venue.
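For reference, here is a minimal sketch of a check-in request that does include the "ll" parameter, using Python's requests library (the venue ID, token, and coordinates are placeholders):

import requests

resp = requests.post(
    "https://api.foursquare.com/v2/checkins/add",
    data={
        "venueId": "VENUE_ID",           # placeholder
        "ll": "40.7484,-73.9857",        # the user's lat,lng; omitting this can keep you out of hereNow
        "oauth_token": "ACCESS_TOKEN",   # placeholder
        "v": "20120609",                 # version parameter
    },
)
print(resp.json()["meta"]["code"])       # expect 200 on success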
You should be able to get other people checked into the venue after having checked in, but remember, there might not always be other people checked in.
In general, for the venue you've checked in at, you should see roughly the same number of people listed in the hereNow block as the "count" field shows. However, people can opt out of showing up in hereNow, so you will still occasionally see fewer people than the count field shows.
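As a quick illustration, here is a hedged sketch of comparing the two (the response field names follow the general v2 envelope, but verify against the current docs; the IDs and token are placeholders):

import requests

resp = requests.get(
    "https://api.foursquare.com/v2/venues/VENUE_ID/herenow",
    params={"oauth_token": "ACCESS_TOKEN", "v": "20120609"},
).json()

here_now = resp["response"]["hereNow"]
# count can exceed len(items) when some users have opted out of hereNow
print(here_now["count"], len(here_now["items"]))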
Remember that you're only considered "checked in" to a venue if (a) it's been less than 3 hours since you checked in and (b) you haven't checked in anywhere else.
If you're still having problems, email api (at) foursquare.com with the specific hereNow request you made before checking in, the check-in request itself, and the hereNow request you made after checking in, and we can help you debug.

Related

Sonos player not calling getExtendedMetadata after rating item

I am implementing two-button rating for tracks. When the user clicks vote up/down, rateItem is called, and my server returns an empty rateItemResponse (I have defined AutoSkip and OnSuccessMessageId in the presentation map). Immediately afterwards, getLastUpdate is called, and my server returns a response with the favorites value incremented. However, after the getLastUpdate response is returned, getExtendedMetadata is not called on the rated track to fetch the new user rating. What specific setup is required for getExtendedMetadata to be called after an updated favorites value is seen?
There are a few things you need to do to get this going, not all of which you mention in your question. So, in case you haven't done so already, run the test described here: https://musicpartners.sonos.com/node/376
Specifically note this section on that page:
There is another test, test_meta_data, in the ratings fixture that verifies that both getExtendedMetadata and getMediaMetadata are implemented correctly. This means that when these SMAPI requests are made with the object IDs listed in the Self-Test config for Test Track, the responses should contain a dynamic tag as part of the mediaMetadata. Inside the dynamic tag you must set property tags, each of which should contain a name and a value that is mapped in the presentation map.
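As a rough sketch, the getExtendedMetadata response for the test track should then contain something shaped like the following (element names are taken from the page above; treat the exact schema as an assumption and verify it against the SMAPI WSDL):

<mediaMetadata>
  <id>track:12345</id>
  <itemType>track</itemType>
  <title>Test Track</title>
  <mimeType>audio/mp3</mimeType>
  <dynamic>
    <property>
      <name>isFavorite</name>  <!-- must match a property name referenced in your presentation map -->
      <value>1</value>
    </property>
  </dynamic>
</mediaMetadata>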
Also, maybe Sonos doesn't call getExtendedMetadata because there were no dynamic or property tags (or something) in the initial call's response (but the getLastUpdate call is supposed to handle that, I think).
EDIT:
This seems to me to be a bug in the Sonos customsd system. It should send a getExtendedMetadata request after a getLastUpdate response that indicates a change, but it doesn't. I expect that this is a known bug, but since I cannot find any SMAPI bug report sites monitored by Sonos, I am not sure. At any rate, if you are planning to submit your music service to Sonos, they will test it and let you know whether this is also a problem in production.
Are there any Sonos employees who can shed light on this? (Since the move to Stack Overflow, it seems nearly impossible to contact anybody from Sonos.)

Instagram OAuth Returning "No matching code found" At Random Times

We seem to be experiencing a really strange problem when attempting to retrieve Instagram access tokens on the server side.
We're seeing "No matching code found" errors at random times. When it does happen, the errors seem to be clumped: they aren't spread throughout the day but fall within a random 15-minute or so period late at night, and as far as we can tell it only affects a very small percentage of users during that time.
We've looked at other possibilities. The IP from which the access token request is made seemed like one, but the issue is not consistent across all users in the time frame during which these "No matching code found" 400 errors are returned.
Has anyone experienced this before? Any ideas? Also of note: our application sees thousands of logins per day, so the randomness of the timing doesn't make much sense.
It looks like users can end up with more than one code: you receive the first code, but you need the second. Try having users log in again when you get the error; the user won't see the Instagram page with the confirm button, just redirects.
A possible sequence that produces the error (a sketch of the server-side handling follows the list):
1. The user clicks the auth link.
2. A first code is issued.
3. The user clicks the auth link again (a double click, a redirect problem, a shared auth system, etc.).
4. Another code is issued (even for the same client_id and redirect_uri).
5. Your server receives the first code.
6. But the first code no longer exists.
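Here is a hedged sketch of the server-side exchange with that retry path, in Python with requests (restart_auth and save_token are hypothetical helpers; the endpoint and field names follow the Instagram OAuth docs of the time):

import requests

CLIENT_ID = "your-client-id"          # placeholders
CLIENT_SECRET = "your-client-secret"
REDIRECT_URI = "https://example.com/callback"

def exchange_code(code, user):
    resp = requests.post(
        "https://api.instagram.com/oauth/access_token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "authorization_code",
            "redirect_uri": REDIRECT_URI,
            "code": code,   # single-use: a later auth click can invalidate it
        },
    )
    body = resp.json()
    if resp.status_code == 400 and "No matching code found" in body.get("error_message", ""):
        # the code was superseded or already consumed; send the user back
        # through /oauth/authorize to pick up a fresh one (hypothetical helper)
        restart_auth(user)
    else:
        save_token(user, body["access_token"])   # hypothetical helper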

PayPal Error: 13122 "This transaction cannot be completed because it violates the PayPal User Agreement."

One of our users is getting an error when they attempt to make a purchase and I'm trying to identify why this is occurring.
The message returned from PayPal is:
<Errors xmlns="urn:ebay:apis:eBLBaseComponents">
<ShortMessage>Transaction refused</ShortMessage>
<LongMessage>This transaction cannot be completed because it violates the PayPal User Agreement.</LongMessage>
<ErrorCode>13122</ErrorCode>
<SeverityCode>Error</SeverityCode>
</Errors>
This product works for other users, just not for him.
Obviously the transaction is violating the User Agreement somehow, but I'd like to identify why.
UPDATE
The users that are affected by it seem to all have one or more of the following: a non-UK email address, a non-UK PayPal account or a non-UK payment source.
We've not had a resolution yet, but have directed several users to contact PayPal directly. The feedback we've had is as follows:
"I tried with another paypal account, that failed too, despite being able to use both PayPal accounts to pay for other services."
"PayPal are aware of the error message, but they simply cannot explain why it's happening. After an hour on the phone with them today, they seem incapable of tracing the reason for this error."
Needless to say we've got some very frustrated users.
I had a similar problem... The title of the item we were selling was:
Inc. Special Offer: <<Famous author>> eBook for only 10.00$
When we changed the title to something more descriptive, everything was OK.
Did you open a ticket with MTS or call the Business Support line? If you did open a ticket can you please give me the ticket number? I'll take a look at it.
It may be something that you need to address with Business Support though.
I had this same error coming back for one of our customers, and I had the same issue as #knagode with my product name.
For some reason "Dark Havana" in the product name set this off and produced this error. If I ever hear back from PayPal on why this phrase is not acceptable, I will circle back and edit this response.
It turns out that the value in "PaymentDetailsItem - Name" was one part of the problem. We didn't have any special characters (or so I thought) in there, just "product name: person name". When I spoke to PayPal support, their response was as follows:
"I have checked it, it's a combination of many things that we cannot disclose.
However removing the ":" before the name PAYMENTREQUEST_0_NAME should work.
Can you send PAYMENTREQUEST_0_NAME=xxxx USER NAME
Instead of PAYMENTREQUEST_0_NAME=xxxx : USER NAME"
Once I had removed the ":" from that field, it started working again...bizarre!
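If you want to guard against this proactively, here is a minimal sketch of the sanitization we ended up with (in Python; the exact set of characters PayPal objects to is undisclosed, so the list below is an assumption based on the reports in this thread):

def sanitize_item_name(name):
    # ":" is the character PayPal support asked us to remove; "<" and ">" are
    # stripped defensively, since the full filter list is not disclosed
    for ch in (":", "<", ">"):
        name = name.replace(ch, " ")
    return " ".join(name.split())   # collapse the resulting double spaces

sanitize_item_name("product name: person name")   # -> "product name person name"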
This error still gets triggered with non-UK PayPal accounts but happens less frequently now.
For any others investigating this issue, please be aware that PayPal has filters in place that look for specific geographic terms. If any terms used in the product description trigger the filters, the transaction can be refused. In our case, we are an online rug retailer with products ranging across the traditional rug spectrum. We found that any rug with the term "Persia" in the title was being refused, due to economic sanctions against the nation states associated with that term.
The filters are not only geographic; other controversial terms can also cause a transaction to be refused. In one instance, we had a rug with "Confederate" in the product description that also triggered the filters.

/me/home doesn't return all the posts

I've noticed that /me/home in the Graph API doesn't return posts by certain users. I've tried this in my app as well as in the Graph API Explorer. It returns most posts but consistently fails to include posts by certain friends. I don't think this is a caching issue, because I've tried over a period of one day with the same results. It's not random either: it's the same handful of posts that are always missing.
I checked the posts in question and don't see anything special. And it's happening with newer posts also.
Do I need to add any special parameters to my request?
Users can configure, in their Facebook privacy preferences, what data third-party applications can see about them.
What you are seeing is that certain users have made their activities not viewable to your application.

How to skip known entries when syncing with Google Reader?

For writing an offline client for the Google Reader service, I would like to know how best to sync with the service.
There doesn't seem to be official documentation yet, and the best source I've found so far is this: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI
Now consider this: with the information above, I can download all unread items, I can specify how many items to download, and using the Atom ID I can detect duplicate entries that I have already downloaded.
What's missing for me is a way to specify that I just want the updates since my last sync.
I can ask for the 10 latest entries (parameter n=10, parameter r=d). If I specify the parameter r=o (date ascending), then I can also specify the parameter ot=[time of last sync], but only then, and the ascending order doesn't make any sense when I just want to read some items rather than all of them.
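For example (parameter names per the pyrfeed wiki page above; the timestamp is a placeholder):

# the 10 latest entries, newest first:
http://www.google.com/reader/atom/user/-/state/com.google/reading-list?r=d&n=10
# entries since the last sync, oldest first (ot only works together with r=o):
http://www.google.com/reader/atom/user/-/state/com.google/reading-list?r=o&ot=1234567890&n=100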
Any idea how to solve that without downloading all items again and just rejecting duplicates? That's not a very economical way of polling.
Someone proposed that I could specify that I only want the unread entries. But for that solution to work in such a way that Google Reader does not offer these entries again, I would need to mark them as read. In turn, that would mean that I need to keep my own read/unread state on the client and that the entries would already be marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
Cheers,
Mariano
To get the latest entries, use the standard from-newest-date-descending download, which will start from the latest entries. You will receive a "continuation" token in the XML result, looking something like this:
<gr:continuation>CArhxxjRmNsC</gr:continuation>
Scan through the results, pulling out anything new to you. You should find that either all results are new, or everything up to a point is new, and all after that are already known to you.
In the latter case, you're done, but in the former you need to find the new stuff older than what you've already retrieved. Do this by using the continuation token to get the results starting from just after the last result in the set you just retrieved, passing it as the c parameter in the GET request, e.g.:
http://www.google.com/reader/atom/user/-/state/com.google/reading-list?c=CArhxxjRmNsC
Continue this way until you have everything.
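Putting that together, here is a minimal sketch in Python (authentication is omitted, and the gr: namespace URI is an assumption based on feeds I've seen; verify both against real responses):

import requests
import xml.etree.ElementTree as ET

BASE = "http://www.google.com/reader/atom/user/-/state/com.google/reading-list"
NS = {
    "atom": "http://www.w3.org/2005/Atom",
    "gr": "http://www.google.com/schemas/reader/atom/",
}

session = requests.Session()   # add your Google auth cookie/header here

def fetch_page(continuation=None, n=20):
    # fetch one page of the reading list; returns (entry ids, continuation token or None)
    params = {"n": n}
    if continuation:
        params["c"] = continuation
    root = ET.fromstring(session.get(BASE, params=params).content)
    ids = [e.findtext("atom:id", namespaces=NS) for e in root.findall("atom:entry", NS)]
    return ids, root.findtext("gr:continuation", namespaces=NS)

known = set()                  # ids you have already stored locally
ids, cont = fetch_page()
while cont and not any(i in known for i in ids):
    ids, cont = fetch_page(continuation=cont)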
The n parameter, which is a count of the number of items to retrieve, works well with this, and you can change it as you go. If the frequency of checking is user-set, and thus could be very frequent or very rare, you can use an adaptive algorithm to reduce network traffic and your processing load. Initially request a small number of the latest entries, say five (add n=5 to the URL of your GET request). If all are new, in the next request, where you use the continuation, ask for a larger number, say 20. If those are still all new, either the feed has a lot of updates or it's been a while, so continue on in groups of 100 or whatever.
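A sketch of that growth policy, reusing fetch_page from the sketch above (the multiplier and cap are arbitrary choices):

n = 5
ids, cont = fetch_page(n=n)
while cont and all(i not in known for i in ids):   # everything in this batch is new
    n = min(n * 4, 100)                            # grow: 5 -> 20 -> 80 -> 100
    ids, cont = fetch_page(continuation=cont, n=n)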
However, and correct me if I'm wrong here, you also want to know, after you've downloaded an item, whether its state changes from "unread" to "read" due to the person reading it using the Google Reader interface.
One approach to this would be the following (a code sketch follows the list):
1. Update the status on google of any items that have been read locally.
2. Check and save the unread count for the feed. (You want to do this before the next step, so that you guarantee that new items have not arrived between your download of the newest items and the time you check the read count.)
3. Download the latest items.
4. Calculate your read count, and compare that to google's. If the feed has a higher read count than you calculated, you know that something's been read on google.
5. If something has been read on google, start downloading read items and comparing them with your database of unread items. You'll find some items that google says are read that your database claims are unread; update these. Continue doing so until you've found a number of these items equal to the difference between your read count and google's, or until the downloads get unreasonable.
6. If you didn't find all of the read items, c'est la vie; record the number remaining as an "unfound unread" total, which you also need to include in your next calculation of the local number you think are unread.
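A hedged sketch of that loop (every helper and the local_db interface here are hypothetical; the point is the shape of steps 1-6, expressed with unread counts, which is equivalent):

def reconcile(local_db):
    push_local_reads(local_db)                       # step 1: send locally-read items to google
    remote_unread = fetch_remote_unread_count()      # step 2: snapshot before downloading
    download_latest(local_db)                        # step 3: the continuation-based fetch above
    read_on_google = local_db.count_unread() - remote_unread   # steps 4-5
    if read_on_google > 0:
        for entry_id in iter_remote_read_ids():      # newest-first stream of read items;
            if read_on_google == 0:                  # cap this loop in practice
                break
            if local_db.is_unread(entry_id):
                local_db.mark_read(entry_id)
                read_on_google -= 1
    return read_on_google                            # step 6: the "unfound unread" remainder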
If the user subscribes to a lot of different blogs, it's also likely that he labels them extensively, so you can do this whole thing on a per-label basis rather than for the entire feed. That should help keep the amount of data down, since you won't need to do any transfers for labels where the user didn't read anything new on Google Reader.
This whole scheme can be applied to other statuses, such as starred or unstarred, as well.
Now, as you say, this
...would mean that I need to keep my own read/unread state on the client and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
True enough. Neither keeping a local read/unread state (since you're keeping a database of all of the items anyway) nor marking items read in google (which the API supports) seems very difficult, so why doesn't this work for you?
There is one further hitch, however: the user may mark an already-read item as unread on google. This throws a bit of a wrench into the system. My suggestion there, if you really want to try to take care of this, is to assume that the user will generally be touching only more recent stuff, and to download the latest couple hundred or so items every time, checking the status on all of them. (This isn't all that bad; downloading 100 items took me anywhere from 0.3s for 300KB to 2.5s for 2.5MB, albeit on a very fast broadband connection.)
Again, if the user has a large number of subscriptions, he's also probably got a reasonably large number of labels, so doing this on a per-label basis will speed things up. I'd suggest, actually, that not only do you check on a per-label basis, but you also spread out the checks, checking a single label each minute rather than everything once every twenty minutes. You can also do this "big check" for status changes on older items less often than you do a "new stuff" check, perhaps once every few hours, if you want to keep bandwidth down.
This is a bit of a bandwidth hog, mainly because you need to download the full article from Google merely to check the status. Unfortunately, I can't see any way around that in the API docs we have available to us. My only real advice is to minimize the checking of status on non-new items.
The Google Reader API hasn't been officially released yet; when it is, this answer may change.
Currently, you would have to call the API and disregard items already downloaded, which, as you said, isn't terribly efficient, since you will be re-downloading items every time even if you already have them.