I am an iPhone/iPad developer using Objective-C, and I am using CouchDB for my application.
My issue is: if I delete my local CouchDB database, or run the app for the first time, I get this error:
OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.415>}}
This is my workflow:
My application replicates to a remote iriscouch instance using xyz:a...#mmm.iriscouch.com/databasename.
Credentials are checked.
If the replication succeeds, everything works as expected.
If I reset my local CouchDB contents and repeat the steps above, sometimes I get the error below and synchronization with the remote stops; it is then hard to re-sync the application.
This is the error from the log:
[info] [<0.140.0>] 127.0.0.1 - - GET /_replicator/_changes?feed=continuous&heartbeat=300000&since=1 200
1> OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.506>}}
1> OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.507>}}
1>
Waiting for your response,
Krishna.
It might look like validation functions running at the destination end, but in this case this is the message returned when an Erlang process tree times out. It should restart by itself after a few (probably 5) seconds.
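If the replication does stay stuck after wiping the local database, one thing to try is deleting and re-creating the replication document in the _replicator database. This is only a sketch: the document name, credentials, database names and the continuous flag below are placeholders to adapt to your setup, not values taken from the question.
curl -X PUT http://localhost:5984/_replicator/push_to_remote \
     -H "Content-Type: application/json" \
     -d '{"source":"databasename","target":"http://user:password@mmm.iriscouch.com/databasename","continuous":true}'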
We have a DotNetNuke site running on two servers that are load balanced. To ensure the files are in sync on these servers, we are using File Replication Service.
Search works fine on DotNetNuke when not load balanced, but in the load balanced setup the search stops working after a while (no suggestions, no results).
The following related exception is all over our log files:
[D:2][T:31][ERROR] DotNetNuke.Services.Exceptions.Exceptions - Lucene.Net.Store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#D:\Sites\SiteName\App_Data\Search\write.lock
at Lucene.Net.Store.Lock.Obtain(Int64 lockWaitTimeout)
at Lucene.Net.Index.IndexWriter.Init(Directory d, Analyzer a, Boolean create, IndexDeletionPolicy deletionPolicy, Int32 maxFieldLength, IndexingChain indexingChain, IndexCommit commit)
at Lucene.Net.Index.IndexWriter..ctor(Directory d, Analyzer a, MaxFieldLength mfl)
at DotNetNuke.Services.Search.Internals.LuceneControllerImpl.get_Writer()
at DotNetNuke.Services.Search.Internals.LuceneControllerImpl.Delete(Query query)
at DotNetNuke.Services.Search.Internals.InternalSearchControllerImpl.DeleteSearchDocumentInternal(SearchDocument searchDocument, Boolean autoCommit)
at DotNetNuke.Services.Search.Internals.InternalSearchControllerImpl.DeleteSearchDocumentsByModule(Int32 portalId, Int32 moduleId, Int32 moduleDefId)
at DotNetNuke.Services.Search.SearchDataStore.StoreSearchItems(SearchItemInfoCollection searchItems)
at DotNetNuke.Services.Search.SearchEngine.IndexContent()
at DotNetNuke.Services.Search.SearchEngineScheduler.DoWork()
My best guess is that the issue is caused by both servers running their search functionality while the File Replication Service syncs the files, which causes conflicts.
What would be the best way to solve this?
Add an exclusion rule to not replicate the search index folder, but let both servers keep running search?
Somehow disable one server from indexing?
Any other suggestions?
Installation details:
DNN v. 09.02.00 (366)
.NET Framework 4.6
There's a 'Scheduler' tool inside the settings section that contains all cron/background-job functionality.
One of the background jobs is the 'Search: Site Crawler' job which is responsible for indexing the website. When that job runs at the same time on both servers, unexpected conflicts occur. To prevent this from happening, you can configure the job to only run on a specified server using the 'Servers' setting.
After configuring the job to only run on one server, the issue did not come back and search still works on both servers.
Thanks @Sanjay for pointing me in the right direction.
If I remember correctly, search is done via a scheduled task. Have you tried setting up the task to run on only one server and then using file replication to sync the index across to the other server?
I'm trying to capture the desktop and stream it live through an Apache server using DashCast. It captures and plays correctly when I do it on demand; however, when I do it live and then play it with MP4Client, it shows only a black screen, and I don't even get an error message while capturing. The commands I'm using are:
DashCast -vf x11grab -vres 1280x720 -v :0.0 -npts -live -out /public_html/
And then I play with:
MP4Client http://localhost/vitor/dashcast.mpd
Which results in the following output:
MP4Client http://localhost/vitor/dashcast/dashcast.mpd
Using config file in /home/vitor directory
System info: 11948 MB RAM - 8 cores
Modules Found : 36
Loading GPAC Terminal
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
[Thread MediaManager] Couldn't set priority(2) for thread ID 0x9b55a700
Terminal Loaded in 35 ms
Opening URL localhost/vitor/dashcast/dashcast.mpd
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
Service Connected
So what am I doing wrong? The client apparently connects correctly to the server and opens the player, but then it doesn't show anything on screen. I'm using Ubuntu 14.04 with GPAC version 0.5.0.
[DASH] Slight drift in UTC clock at time 2015-08-05T12:38:45Z: diff AST - now 3563501 ms
This message indicates that there is a difference ('slight' is the wrong word here given the actual difference!) between the UTC time indicated in the MPD's availabilityStartTime attribute and the current time that MP4Client uses to compute which segment to fetch. This is only relevant for live content, because for on-demand content all segments are assumed to be available all the time.
MP4Client uses different strategies to determine the 'current' time. The system time on the client may differ from the system time on the server, for instance if they use different NTP servers, so system time is not reliable. MP4Client therefore tries to get the time from the server. It first tries to use a specific HTTP "Server-UTC" header that the server may set (see for example this code). If this header is not set, it looks at the HTTP "Date" header, even though it's not very precise. In your case, your HTTP server probably has a time configuration that does not match the system time. You can tell MP4Client to stop using the server information and to rely on its system time instead. Since you are running the client and server on the same machine, that should work. The documentation of that option is here. For that, use:
MP4Client http://localhost/file.mpd -opt DASH:UseServerUTC=no
Alternatively, you can try to play the MPD locally without going through the web server.
MP4Client file.mpd
If that is not working, open an issue on GPAC's GitHub providing as much information as possible, in particular the result of MP4Box -version.
I am having issues getting my remote server configured correctly after everything worked properly on localhost. I am getting the following error:
InvocationTargetException:There was an error while invoking the operation. Check your operation inputs or server code and try invoking the operation again.
The key parts of the large message I am getting after that are:
Class "test" does not exist: Plugin by name 'Test' was not found in the registry; used paths::
..../smii/test/ID5D8FE3F-A1D1-4174-98B3-4BED10FD8FFEI79685B66-D792-E4E9-13B3-00004DA5951BI0F94D267-0704-3C89-0B5B-0000090BA097134950398900
I don't know how this last part of the address, with the random numbers and letters, is getting there.
Also, when I look through my server settings today it seems to add a "-1" to the end of the initial directory on the server. I'm not sure why, but I seem to be having major issues implementing the service remotely.
I am getting a 404 error when trying to push my database to Heroku via Taps
(1.9.2@[app_name]_db) heroku db:push --app [app_name]
Loaded Taps v0.3.24
Auto-detected local database: sqlite://db/development.sqlite3
Warning: Data in the app '[app-name]' will be overwritten and will not be recoverable.
! WARNING: Destructive Action
! This command will affect the app: [app-name]
! To proceed, type "[app-name]" or re-run this command with --confirm [app-name]
> [app-name]
Sending schema
Schema: 0% | | ETA: --:--:--
Saving session to push_201209251425.dat..
!!! Caught Server Exception
HTTP CODE: 404
The db:push command used to work fine; then I made some changes to my database by rolling back the migrations, editing them, and re-migrating. Now I can deploy the app just fine, but the database will not push. I don't know whether this is related to editing the migrations or not.
The app works fine on my machine, and I wanted to eliminate any discrepancies between Heroku's copy and my own, so I created a new app and pushed to that. Same thing: the Heroku app works but will not receive db:push; it errors out with the same 404 above.
Is this a temporary Heroku service outage, or has changing my app caused the 404?
Edit: heroku logs do not show any error message
Heroku support was taking too long to respond, so I found a workaround that communicates with my EC2 instance directly by using the Taps gem.
Go to the Heroku dashboard for your database. For me this was at
https://postgres.heroku.com/databases/[my-database-name]
though I navigated there by going through Add-ons.
Click on 'URL' in 'Connection Settings'; it should give you something like
postgres://[username]:[password]@ec2-[ip_address_numbers].compute-1.amazonaws.com:[port]/[database_name]
Copy this value down; I'll reference it here as [EC2_URL].
Get Taps installed on a 1.9.2 gemset if you don't already have it (not sure if 1.9.3 will work; I didn't test it).
Set up a localhost taps server to facilitate the transfer by running this in a terminal:
taps server postgres://[local_machine_username]@localhost/[name_from_database.yml] [some_username] [some_password]
(note the spaces before the username and password)
Then you can run the transfer yourself from another terminal window:
taps pull [EC2_URL] http://[some_username]:[some_password]@localhost:5000
It should run and pull all your data from the local development db up to the Amazon instance. You can also go the other direction (see the sketch below), or choose a different database, etc. Or not, I'm not a cop.
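For the reverse direction (copying the Heroku/EC2 data back down into the local database the taps server is fronting), the push subcommand should take the same arguments. I haven't tested this exact invocation, and it assumes push mirrors pull:
taps push [EC2_URL] http://[some_username]:[some_password]@localhost:5000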
There are some problems with the heroku db commands and Ruby 1.9.2 (the version I have).
db:pull ends with "Unable to fetch tables information from"
db:push ends with "!!! Caught Server Exception HTTP CODE: 404"
There is a workaround for this problem: switch to Ruby 1.8.7 (I am using rvm for this) just long enough to do the db operations on Heroku, and switch Ruby back when you're done.
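A minimal sketch of that switch, assuming rvm already has a 1.8.7 Ruby installed and the heroku and taps gems are available under it:
rvm use 1.8.7
heroku db:push --app [app_name]
rvm use 1.9.2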
I do the same process (have Heroku convert my sqlite database to Postgres), and I was getting this problem yesterday as well. It seems to be working now, so I believe it was an issue on Heroku's side.
I'm using a large app instance to run a basic Java web application (GWT + Spring). There's an expensive operation within my application (a report) which takes a long time to execute.
I've tried running it with the CloudBees SDK on my local machine, with settings similar to what it would have on the cloud, and it seems to function just fine. It runs in about 3-4 minutes.
On the cloud, it seems to take longer. The problem isn't the fact that it takes long. What happens is that CloudBees terminates the session after 5 minutes and gives me an error in my browser saying 'Unable to connect to server. Please contact your administrator'. A report which doesn't take as long runs just fine. My application has a session timeout of 30 minutes, so that isn't the problem either.
What could possibly be going wrong? Is it something to do with CloudBees?
This may be due to proxy buffering of your request through the routing layer (revproxy), so it most likely isn't a session timeout but the HTTP connection getting cut.
You can set proxyBuffering=false via the bees CLI (e.g. when you deploy the app); this will ensure longer-running connections can work.
Ideally, however, you could change the app slightly to return a token to the browser which you can then poll for completion status, since even with a connection that lasts that long, going over the internet may give a worse experience than running locally.
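A minimal sketch of that polling pattern from the client side, assuming you add hypothetical /report/start, /report/status and /report/result endpoints to the app (none of these exist in the app as described, and the host name is a placeholder too):
# start the report and capture the job id the (hypothetical) endpoint returns
JOB_ID=$(curl -s -X POST http://yourapp.example.com/report/start)
# poll the (hypothetical) status endpoint every 10 seconds until it reports DONE
until curl -s "http://yourapp.example.com/report/status/$JOB_ID" | grep -q DONE; do
  sleep 10
done
# fetch the finished report once the job is complete
curl -s -o report.pdf "http://yourapp.example.com/report/result/$JOB_ID"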