Twitter APIs - Twitter4J - sync issue? - authentication

I am using Twitter4J to retrieve user timelines, but it stopped working. The number of allowed requests is fine (see the rate limit status below), but I get an authentication error, probably related to clock sync?
INFO: Error while querying Twitter: 401:Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
{"request":"/1.1/statuses/user_timeline.json","error":"Not authorized."}
401:Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
{"request":"/1.1/statuses/user_timeline.json","error":"Not authorized."}
rateLimitStatus=RateLimitStatusJSONImpl{remaining=178, limit=180, resetTimeInSeconds=1432305852, secondsUntilReset=899}, version=3.0.5}
Not sure what to do from here. I've already tried to sync my server with ntpdate ntp.ubuntu.com, with no luck.
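For reference, a minimal Twitter4J setup that reproduces the call (the key/secret strings are placeholders, not real values); verifyCredentials() is a convenient probe here, since it fails with the same 401 when either the keys or the clock are wrong:

import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterException;
import twitter4j.TwitterFactory;
import twitter4j.conf.ConfigurationBuilder;

public class TimelineCheck {
    public static void main(String[] args) throws TwitterException {
        // Placeholders: substitute the values from your app's page on dev.twitter.com.
        ConfigurationBuilder cb = new ConfigurationBuilder()
                .setOAuthConsumerKey("YOUR_CONSUMER_KEY")
                .setOAuthConsumerSecret("YOUR_CONSUMER_SECRET")
                .setOAuthAccessToken("YOUR_ACCESS_TOKEN")
                .setOAuthAccessTokenSecret("YOUR_ACCESS_TOKEN_SECRET");
        Twitter twitter = new TwitterFactory(cb.build()).getInstance();
        // verifyCredentials() fails with the same 401 if the keys or the clock are wrong,
        // which separates an auth problem from a problem with the timeline call itself.
        System.out.println("Authenticated as: " + twitter.verifyCredentials().getScreenName());
        for (Status s : twitter.getUserTimeline("twitterapi")) { // any screen name works here
            System.out.println(s.getText());
        }
    }
}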

I think you are using the sandbox (built-in VM) of Cloudera/Hortonworks, etc.
I was getting the same problem and tried to sync my clock with 'time.windows.com', but failed to do so. So I moved to an existing 4-node cluster, where the clock was in sync, and I could run my requests to Twitter successfully.
Conclusion: move from the Cloudera/Hortonworks VM to your own installed OS and get the clock in sync.
Hope this helps!
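Before moving installations, it may also be worth measuring how far off the VM clock actually is; ntpdate has a query-only mode that prints the offset without touching the clock:
ntpdate -q ntp.ubuntu.com
If the reported offset is large (Twitter only tolerates a small skew), the OAuth timestamps will be rejected even with valid keys.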

Sonos API subscription callbacks stopped

I have a Perl service on Apache httpd that has been working fine for several years to issue Sonos commands and receive callbacks. About two weeks ago, I stopped receiving any callbacks.
I subscribed successfully (response={}) for groupVolume, playbackMetadata, and playback events.
I am successfully getting webhook messages from other services (e.g., Vonage) over https, so the port appears to be open to my server and Apache is successfully processing those requests. I see no trace of any messages from the Sonos API in my Apache logs.
I have no trouble issuing commands (setMute, getFavorites, getPlaybackMetadata, etc.). Only the callbacks are a problem.
I ran the ssltools checker from DigiCert but found no issues.
I can't recall making any changes to the home router config.
Does anyone else have a problem like this, or know how to diagnose what's happening?
I installed Wireshark but am overwhelmed by its functionality and don't know how to narrow down what I should be looking for to see whether the messages are being received and blocked somehow.
It may be unlikely, but is it possible that there isn't any usage of your integration that would result in callbacks being sent to your service? For example, if volume isn't being changed, or playback isn't happening, you won't receive events.
If that's not the case, additional information is required to debug this issue. Could you please email developer-feedback@sonos.com with the following information:
The name of your service/application
The date/time your service stopped receiving callback events. You said about two weeks ago, but could you be more specific?
The clientId used by your code. This is the UUID you generated when you initially created the "API Key" on developer.sonos.com. Format is xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (note - we do not need the secret associated with this key).
With that information we should be able to determine the cause of your missing callbacks.
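On the Wireshark point from the question: a capture filter cuts the noise down considerably. Assuming the callbacks arrive over HTTPS on the same port as your other webhooks (an assumption on my part), a capture filter such as
tcp port 443
run on the server will show whether any connections are reaching the machine at all. If the Vonage traffic shows up there but nothing arrives around the time a Sonos event should fire, the callbacks are being dropped before they ever reach Apache (router, firewall, or never sent).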

"The search engine appears to be down or failing to respond to the search query"

I've installed FusionAuth (awesome product) into a Docker Swarm cluster using the official docker-compose.yml file and everything seems to work brilliantly.
EXCEPT
Periodically, when a user goes to log in, they will be presented with the above error stating that the search engine is not available. If they try again immediately, everything works correctly! I would, obviously, prefer that they never saw the error.
Elasticsearch is definitely running and is responding to API calls correctly, and I can see the fusionauth_user index is present and populated with docs.
I guess my question is twofold:
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
I've searched the docs for answers to the above, but I can't seem to find anything :-(
Thanks for the kind feedback.
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
Elasticsearch provides full-text search of user data. Each time a user is created or updated, the user is re-indexed; in this case, during login, we are updating the search index with the last login instant.
This service is required and cannot be disabled. We have had clients request that this service be made optional for embedded applications or small-scale scenarios where Elasticsearch may not be required. While this is not currently planned, it is possible we may revisit this option in the future.
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
Not currently.
Full disclosure: I am not a Docker or Docker Swarm expert at all. Perhaps there are some nuances to Swarm and response time due to spin-up and spin-down of resources?
Do you see any exceptions in the log when a user sees this error on the login?
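One way to narrow this down from the Swarm side (my suggestion, not an official FusionAuth diagnostic): when a user hits the error, query Elasticsearch directly from inside the FusionAuth container and see whether the cluster itself is slow or briefly unreachable. The host name below is an assumption; use whatever name your compose file gives the search service:
curl -s http://elasticsearch:9200/_cluster/health
curl -s http://elasticsearch:9200/fusionauth_user/_count
If those calls stall or fail at the same moment the login error appears, the problem is Swarm networking to Elasticsearch rather than FusionAuth itself.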

Issue when playing DashCast live stream

I'm trying to capture the desktop and stream it live through an Apache server using DashCast. It captures and plays correctly when I do it on demand; however, when I do it live and then play it with MP4Client, it shows only a black screen, without any error message during capture. The commands I'm using are:
DashCast -vf x11grab -vres 1280x720 -v :0.0 -npts -live -out /public_html/
And then I play with:
MP4Client http://localhost/vitor/dashcast.mpd
Which results in the following output:
MP4Client http://localhost/vitor/dashcast/dashcast.mpd
Using config file in /home/vitor directory
System info: 11948 MB RAM - 8 cores
Modules Found : 36
Loading GPAC Terminal
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
Terminal Loaded in 35 ms
Opening URL localhost/vitor/dashcast/dashcast.mpd
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
Service Connected
So what am I doing wrong? The client apparently connects correctly to the server and opens the player, but then it doesn't show anything on screen. I'm using Ubuntu 14.04 with GPAC version 0.5.0.
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
This message indicates that there is a difference ('slight' is the wrong word here, given the actual difference!) between the UTC time indicated in the MPD's availabilityStartTime attribute and the current time that MP4Client uses to compute which segment to fetch. Here the difference is 3563501 ms, almost exactly one hour, which suggests a timezone-sized offset; the client is computing segment availability against the wrong 'now'. This is only relevant for live, because on demand all segments are assumed to be available all the time.
MP4Client uses different strategies to determine the 'current' time. The system time on the client may differ from the system time on the server, for instance if they use different NTP servers, so system time is not reliable. MP4Client therefore tries to get the time from the server. It first tries a specific HTTP "Server-UTC" header that the server may set (see for example this code). If this header is not set, it looks at the HTTP "Date" header, even though that is not very precise. In your case, your HTTP server probably has a time configuration that does not match the system time. You can tell MP4Client to stop using the server information and rely on its own system time; since you are running client and server on the same machine, that should work. The documentation of that option is here. For that, use:
MP4Client http://localhost/file.mpd -opt DASH:UseServerUTC=no
Alternatively, you can try to play the MPD locally without going through the web server.
MP4Client file.mpd
If that doesn't work, open an issue on GPAC's GitHub, providing as much information as possible, in particular the output of MP4Box -version.
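A quick way to confirm the mismatch described above (my suggestion, not from the GPAC docs) is to compare the Date header Apache returns with the local clock:
curl -sI http://localhost/vitor/dashcast/dashcast.mpd | grep -i '^Date'
date -u
If the two timestamps differ by roughly the hour seen in the log, the server's time configuration is the culprit.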

The request failed with HTTP status 417: Expectation failed

Without getting into too much detail about the code:
I have a 'kiosk' application running on about 500-800 kiosks at about 50 locations. It is a very simple application that connects to the internet via a Verizon MIFI (2-3 MIFIs per location). We believe that Verizon has made some changes to the network, and now I randomly get
The request failed with HTTP status 417: Expectation failed
I have viewed The request failed with HTTP status 417: Expectation Failed - Using Web Services
and FB Connect: (417) Expectation failed
But you see, I had already used
System.Net.ServicePointManager.Expect100Continue = false
in my code.
So one of the issues I have is that the application isn't easy to test: it will fail for 20-30 minutes or for several days, then clear itself up.
Changing the config on every kiosk to include
<system.net>
  <settings>
    <servicePointManager expect100Continue="false" />
  </settings>
</system.net>
would be a large task, and I don't know if that would even fix it. Since the failure is random, I'm having trouble reproducing it; I typically can't get it to fail at my desk more than once.
I happen to use VB and .NET for the application and services that run on the 'kiosk'.
The issue seems to be with the config on the MIFI and not with the Verizon network itself. We recently switched APNs, and when a MIFI connects to the Verizon network it is supposed to update automatically. Sometimes the MIFI fails to update the APN setting, and that is when we get this error message. There are two ways I have found to fix this. The first and easier is to log into the MIFI and manually update the setting. If you are dealing with a user who is not tech savvy and walking them through logging into the MIFI will not work, you can call the Verizon Wireless enterprise help desk and have them remove the feature set from the MIFI, add the features back, and then pull the battery from the MIFI and power cycle it; this makes the MIFI request the configuration settings again.

RUN@cloud consistently throws me out during a heavy operation

I'm using a large app instance to run a basic Java web application (GWT + Spring). There's an expensive operation within my application (a report) which takes a long time to execute.
I've tried running it with the CloudBees SDK on my local machine, with settings similar to those on the cloud, and it seems to function just fine. It runs in about 3-4 minutes.
On the cloud, it seems to take longer. The problem isn't the fact that it takes long; what happens is that CloudBees terminates the session after 5 minutes and gives me an error in my browser saying 'Unable to connect to server. Please contact your administrator'. A report which doesn't take as long runs just fine. My application has a session timeout of 30 minutes, so that isn't the problem either.
What could possibly be going wrong? Is it something to do with CloudBees?
This may be due to proxy buffering of your request through the routing layer (revproxy), so it most likely isn't a session timeout but the HTTP connection getting cut.
You can set proxyBuffering=false via the bees CLI command (e.g., when you deploy the app); this will ensure that longer-running connections can work.
Ideally, however, you could change the app slightly to return to the browser immediately with some token, which the browser can then poll for completion status; even with a connection that survives that long, holding it open over the internet may give a bad experience compared to running locally. A sketch of that pattern follows.
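A minimal sketch of that submit-then-poll pattern using plain servlets; the class name, the in-memory job map, and runReport() are hypothetical illustrations, not CloudBees APIs:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReportServlet extends HttpServlet {
    private static final ExecutorService pool = Executors.newFixedThreadPool(4);
    // Sketch only: a real version would also expire finished jobs.
    private static final Map<String, Future<String>> jobs = new ConcurrentHashMap<String, Future<String>>();

    // POST starts the report in the background and returns a token immediately,
    // so no HTTP connection has to stay open for the minutes the report takes.
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
        String token = UUID.randomUUID().toString();
        jobs.put(token, pool.submit(new Callable<String>() {
            public String call() { return runReport(); }
        }));
        resp.getWriter().print(token);
    }

    // GET polls with the token; the browser retries every few seconds until done.
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
        Future<String> job = jobs.get(req.getParameter("token"));
        if (job == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
        } else if (!job.isDone()) {
            resp.getWriter().print("PENDING");
        } else {
            try {
                resp.getWriter().print(job.get()); // the finished report, or a URL to it
            } catch (Exception e) {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
            }
        }
    }

    private String runReport() {
        return "report-ready"; // placeholder for the expensive report generation
    }
}

Each poll is a short request that the revproxy is happy with, and the expensive work is no longer tied to the lifetime of a single HTTP connection.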