I have a pretty standard JDBC configuration for Logstash, but it has suddenly stopped updating :sql_last_value.
I want to understand in what scenarios this can happen. There are also no error logs at all.
I have looked at the solutions here and here; neither is applicable in my case.
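For reference, a minimal sketch of the kind of configuration in question, showing the settings that control whether :sql_last_value is persisted (connection details, table and column names are placeholders):

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "secret"
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "* * * * *"
    statement => "SELECT * FROM events WHERE updated_at > :sql_last_value"
    # the following settings determine whether :sql_last_value advances and where it is stored
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    record_last_run => true
    clean_run => false
    last_run_metadata_path => "/usr/share/logstash/.logstash_jdbc_last_run"
  }
}

If record_last_run is false or clean_run is true, the stored value will not advance even though the pipeline runs cleanly; a last_run_metadata_path that the Logstash user cannot write to is also worth checking.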
I've installed FusionAuth (awesome product) into a Docker Swarm cluster using the official docker-compose.yml file and everything seems to work brilliantly.
EXCEPT
Periodically, when a user goes to login they will be presented with the above error stating that the search engine is not available. If they try again immediately then everything works correctly! I would, obviously, prefer that they never saw the error.
Elasticsearch is definitely running and is responding to API calls correctly, and I can see the fusionauth_user index is present and populated with docs.
I guess my question is twofold:
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
I've searched the docs for answers to the above but I can't seem to find anything :-(
Thanks for the kind feedback.
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
Elasticsearch provides full text search of user data. Each time a user is created or updated the user is re-indexed. In this case during login, we are updating the search index with the last login instant.
This service is required and cannot be disabled. We have had clients ask us to make this service optional for embedded applications or small-scale scenarios where Elasticsearch may not be required. While this is not currently planned, it is possible we may revisit this option in the future.
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
Not currently.
Full disclosure, I am not a Docker or Docker Swarm expert at all - perhaps there are some nuances to Swarm response times due to spin-up and spin-down of resources?
Do you see any exceptions in the log when a user sees this error on the login?
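If it helps, in a Swarm deployment the application and search logs can be pulled with something like the following (the stack and service names are assumptions based on the official docker-compose.yml and may differ in your cluster):

docker service ls
docker service logs --tail 200 fusionauth_fusionauth
docker service logs --tail 200 fusionauth_search

Any search-engine connection errors around the time a user sees the login failure would be the first thing to look for.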
I'm wondering whether there is some way, in the ModSecurity Apache2 module (version 2.9.1), to log error messages to the log file specified by the SecDebugLog option without duplicating them in the standard Apache error log file.
According to the ModSecurity documentation, the error messages are always written to both log files: "Messages with levels 1–3 are designed to be meaningful, and are copied to the Apache’s error log." But I'd like to keep the ModSecurity output separate and not clutter the standard error log.
You can remove the log action from any of the rules and just leave auditlog.
If using the OWASP CRS then change the default action from this:
SecDefaultAction "phase:1,deny,log"
SecDefaultAction "phase:2,deny,log"
to this:
SecDefaultAction "phase:1,deny,nolog,auditlog"
SecDefaultAction "phase:2,deny,nolog,auditlog"
This will turn off all logging, but then turn audit logging back on.
You may also want to add similar lines for phases 3 and 4, depending on whether you are also checking outbound traffic.
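For example, following the same pattern (phase 3 covers response headers and phase 4 the response body):

SecDefaultAction "phase:3,deny,nolog,auditlog"
SecDefaultAction "phase:4,deny,nolog,auditlog"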
However I would really, really, really caution against this for a number of reasons:
You will eventually block something with a ModSecurity rule, wonder why it's happening, skip over the audit log and blame Apache. Trust me. "Why is this request returning 403 when I can see the page exists?!?!" At least if it's in the error log, you have another chance to see why this is happening.
The entry in the error log is on one line. This makes it much easier to collect, parse and deal with errors in tools like Splunk. The audit log is spread over several lines, so it is less machine-readable. And you should be reviewing your WAF logs regularly, not just assuming everything is working correctly and only looking at the logs when something goes wrong. Maybe not in detail at each log level, but in summary. Ivan Ristic, the original creator of ModSecurity, recently tweeted:
"If you’re not using your WAF as an IDS, you’re doing it wrong."
These are errors. And the error log is therefore the right place for them. The audit log is then a useful place to get extra detail if you cannot explain the errors.
After redeploying a Worklight application, some configuration for analytics got lost, and I'm trying to configure Worklight with analytics again.
The dashboard shows "No data available" for the time after the deployment, although old records are displayed for the time before the deployment of the application, so the DB was not affected.
I set the wl.analytics.logs.forward property to "true" in worklight.properties;
I also set wl.analytics.url to something like:
https://myserver:port/analytics/data
The dashboard is on
https://myserver:port/analytics/console
That is the URL for the analytics server.
However, if I put that data URL in a browser I get something like:
Error 404: java.io.FileNotFoundException: SRVE0190E: File not found: /data
I checked SystemOut.log and SystemErr.log (the WAS logs) and did not see errors there.
Does anybody know which XML file I need to check in order to validate that the configuration is OK for analytics? How could I troubleshoot this problem? Are there other logs I could check?
In the list of environment variables you gave I do not see any for username and password. Try to set:
wl.analytics.password=admin
wl.analytics.username=admin
It would be useful to see a Wireshark trace; maybe you are not getting 403s. The analytics data uploader generally has a small bit of protection, and you have the option to keep or remove it.
#patbarron is correct about the multiple WAR files, though. You need to send your analytics data to the /analytics-service context. The analytics-service WAR is the one that handles all the data processing, querying, etc.; the other WAR, analytics, just handles the console UI.
When testing, it might be beneficial to lower wl.analytics.queue and wl.analytics.queue.size; those values control how data is collected on the MobileFirst runtime server. Data is collected at the runtime server and then sent to the analytics server. Generally, the larger these values are, the longer it will take to send. These are good to set for production.
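Putting the pieces above together, a worklight.properties sketch might look like the following (host, port, credentials and queue values are placeholders, and the exact data path for your version should be confirmed against the documentation):

# forward runtime logs to the analytics server
wl.analytics.logs.forward=true
# point at the analytics-service WAR, not the console WAR
wl.analytics.url=https://myserver:port/analytics-service/data
wl.analytics.username=admin
wl.analytics.password=admin
# smaller values while testing so data is flushed to the analytics server sooner
wl.analytics.queue=2
wl.analytics.queue.size=10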
I'm having major issues with Mule 3 and files that are read and should later be put on a standard queue on ActiveMQ.
Basically it's a really simple service; the inbound endpoint starts off by picking up a file from an SFTP area.
This file is read correctly from the SFTP area, and the Mule log for the reading application states that the file is written to the specified archiveDir.
After this it's silent and nothing else happens... the file is just placed in the archiveDir, and neither ActiveMQ nor Mule 3 gives any indication that something has gone wrong...
The queue names etc. are all correct.
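For reference, a minimal Mule 3 sketch of the kind of flow described (host, credentials, paths and queue name are placeholders; the object-to-string transformer is just one common way to avoid handing an already-consumed file stream to JMS):

<!-- global ActiveMQ connector -->
<jms:activemq-connector name="amqConnector" brokerURL="tcp://localhost:61616" specification="1.1"/>

<flow name="sftpToActiveMq">
    <!-- poll the SFTP area and archive each file after pickup -->
    <sftp:inbound-endpoint host="sftp.example.com" port="22" path="/incoming"
                           user="mule" password="secret" archiveDir="/archive"/>
    <!-- materialise the stream before handing it to JMS -->
    <object-to-string-transformer/>
    <jms:outbound-endpoint queue="incoming.files" connector-ref="amqConnector"/>
</flow>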
Basically the same environment is running on a second server with no disturbance.
Are there any commonly known issues that could make Mule not continue with its processing and put the file on the queue?
Thx in advance!
Okay I have seen some very similar questions here but none seem to be answered to my liking. I have created a Silverlight application that calls a couple of services to populate various comboboxes from the database. I got this working without too much trouble on my local machine.
So now I want to deploy it to our webserver. It was relatively straightforward to get IIS7 to load the Silverlight application. However, none of my services seem to be working properly, in that the comboboxes are empty. In IE I get the following error:
Message: Unhandled Error in Silverlight Application An exception occurred during the operation, making the result invalid. Check InnerException for exception details. at System.ComponentModel.AsyncCompletedEventArgs.RaiseExceptionIfNecessary()
at MyTestPage.ViewModel.MyService.GetInfoCompletedEventArgs.get_Result()
at MyTestPage.ViewModel.MainPageViewModel.b__2(Object s, GetInfoCompletedEventArgs ea)
at MyTestPage.ViewModel.MyService.MyServiceClient.OnGetInfoCompleted(Object state)
Line: 1
Char: 1
Code: 0
URI: http://www.mywebsite.com/MyTestPage.aspx
My problem is that this error only occurs when deploying on the webserver and I have no clue how to debug this problem. The error says to check the InnerException but I haven't found an answer yet (after hours of searching) that tells me how I should do this.
I have tried browsing to the services and I am able to do so using the domain name, i.e. http://test.myserver.com/Services/MyService.svc. However, when logged onto the server and using http://localhost:3456/Services/MyService.svc - which is the address in the ServiceReferences.ClientConfig file - it cannot be found.
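For what it's worth, ServiceReferences.ClientConfig is compiled into the XAP, so whatever address it contains is what the browser-hosted client calls at runtime; if it still says localhost:3456, the call is made against the end user's machine rather than the server. A sketch of what the deployed endpoint entry might look like (binding, contract and endpoint names are whatever was generated for your project):

<system.serviceModel>
    <client>
        <!-- typically only the address needs to change for deployment -->
        <endpoint address="http://test.myserver.com/Services/MyService.svc"
                  binding="basicHttpBinding"
                  bindingConfiguration="BasicHttpBinding_IMyService"
                  contract="MyService.IMyService"
                  name="BasicHttpBinding_IMyService" />
    </client>
</system.serviceModel>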
Some answers here seem to suggest using a clientaccesspolicy.xml file but I don't understand why this should be necessary if the services are hosted on the same server as the application - they aren't required when debugging on my local machine. Despite my reservations I have tried adding a clientaccesspolicy.xml file to the root of the application but this still doesn't make any difference.
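For completeness, the usual permissive policy file looks like the sketch below, and it has to live at the root of the site (e.g. http://test.myserver.com/clientaccesspolicy.xml) rather than in the application folder; for genuinely same-origin calls it should not be needed at all.

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="SOAPAction">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>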
So I have a couple of questions:
1) How do I get access to the InnerException when I am running the application on the webserver? Is there a specific log file I can view or logging I can turn on? (See the web.config sketch after these questions.)
2) If, for some reason, I am trying to access the service in a cross domain fashion (even though they are located on the same server) how do I configure the application so that this isn't required?
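On question 1), one way to surface the server-side exception is to enable WCF diagnostics in the service's web.config and, for debugging only, return exception detail in faults; a sketch along these lines (the log path is a placeholder, and the resulting .svclog file can be opened with SvcTraceViewer.exe):

<system.diagnostics>
  <sources>
    <source name="System.ServiceModel" switchValue="Warning, ActivityTracing" propagateActivity="true">
      <listeners>
        <add name="xmlTrace" type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\logs\MyService.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>

<!-- inside the service's behavior configuration, for debugging only -->
<serviceDebug includeExceptionDetailInFaults="true" />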
UPDATE:
Ok, I was able to get the tracing to work. I can now see the trace details on the page when it loads, but it doesn't really tell me anything useful. I have also added the option to write the details to disk. Initially this file wasn't being written and I couldn't understand why. Then I noticed that refreshing my Silverlight application was not triggering a write to the log. It was only when I manually browsed to the services that the log file was updated. This seems to indicate that my Silverlight application is not hitting the services at all (for some reason). I tried cutting out the view model object and hitting the service directly from the XAML code-behind file, but this didn't make any difference either.
At this point after spending more than two days trying to figure this out, I am thinking about starting again from scratch.
To my mind, it shouldn't be this difficult to deploy something that works on a development machine to a webserver.
I pretty much gave up on my initial approach. I had another go following along with this video: http://www.silverlight.net/learn/videos/all/net-ria-services-intro/. It uses Domain Services instead of the WCF services, and it was actually fairly straightforward to get it going on the webserver. The example is two years old now, so maybe there are better ways to do this (I am open to suggestions), but at least it worked within an hour of trying it (compared to 2.5 days and getting nowhere).