WebLogic: when does it change server health? - weblogic

On the left-hand side of the WebLogic console panel there are five different health states for the servers. I see WebLogic change their status to Warning whenever there is a stuck thread. However, when I run the following in a WLST script:
connect(username, password, url)
serverRuntime()
get('HealthState')
the server health comes back as HEALTH_OK, even though the health status in the panel on the left-hand side is Warning.
Any tips are appreciated
I tried serverLifeCycleRuntimes() as well, by the way. I have googled every site about WebLogic and have not been able to figure it out. Thanks

Well, I kind of figured it out. The server health is made up of subsystem health states. What I do is cd into the JDBC runtime and check its health, then cd into the ThreadPool runtime and check its health (see the sketch below). When the server changes to Overloaded, I do not know.
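For reference, here is a minimal WLST sketch of that subsystem drill-down. It assumes you connect directly to the server you want to inspect and that it uses the standard serverRuntime MBean tree; the URL, the credentials and the server name ManagedServer1 are placeholders.

connect('weblogic', 'password', 't3://localhost:7001')
serverRuntime()

# Overall server health - this is the value that can still report HEALTH_OK
print(get('HealthState'))

# Thread pool health - stuck threads show up here as Warning
cd('ThreadPoolRuntime/ThreadPoolRuntime')
print(get('HealthState'))

# JDBC service health - the child node is named after the server you are connected to
cd('/JDBCServiceRuntime/ManagedServer1')
print(get('HealthState'))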


"The search engine appears to be down or failing to respond to the search query"

I've installed FusionAuth (awesome product) into a Docker Swarm cluster using the official docker-compose.yml file and everything seems to work brilliantly.
EXCEPT
Periodically, when a user goes to log in, they will be presented with the above error stating that the search engine is not available. If they try again immediately, everything works correctly! I would, obviously, prefer that they never saw the error.
Elasticsearch is definitely running and is responding to API calls correctly, and I can see the fusionauth_user index is present and populated with docs.
I guess my question is twofold:
1) What role does the Elasticsearch engine play in the FusionAuth ecosystem, and can it be disabled?
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
I've searched the docs for answers to the above but I can't seem to find anything :-(
Thanks for the kind feedback.
1) What role does the Elasticsearch engine play in the FusionAuth ecosystem, and can it be disabled?
Elasticsearch provides full text search of user data. Each time a user is created or updated the user is re-indexed. In this case during login, we are updating the search index with the last login instant.
This service is required and cannot be disabled. We have had clients request that this service be made optional for embedded applications or small-scale scenarios where Elasticsearch may not be required. While this is not currently planned, it is possible we may revisit this option in the future.
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
Not currently.
Full disclosure, I am not a Docker or Docker Swarm expert at all - perhaps there are some nuances to Swarm and response time due to spin up and spin down of resources?
Do you see any exceptions in the log when a user sees this error on the login?
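If you want to test the spin-up/spin-down theory, one rough approach is to poll Elasticsearch from the swarm host for a while and watch for intermittent failures. The sketch below is just an illustration: the host and port are assumptions, while the fusionauth_user index name comes from the post above.

# Poll cluster health and the fusionauth_user document count every few seconds
# and print any failures, to spot intermittent connectivity to Elasticsearch.
import json, time, urllib.request

ES = 'http://localhost:9200'   # assumption: adjust to where the search service is published

for attempt in range(60):
    try:
        with urllib.request.urlopen(ES + '/_cluster/health', timeout=5) as r:
            health = json.load(r)
        with urllib.request.urlopen(ES + '/fusionauth_user/_count', timeout=5) as r:
            count = json.load(r)
        print(attempt, health['status'], count['count'])
    except Exception as e:
        print(attempt, 'FAILED:', e)
    time.sleep(5)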

Unable to upgrade MID Server

Our ServiceNow instance recently got upgraded from Eureka to Geneva.
The MID Server status went down, and its version does not match the build number in the stats.
I was told that the MID Server would auto-upgrade, which didn't happen. I am using a proxy server to connect; when I test communication with the instance through the proxy it works, so I assume the installed service or application is not able to communicate with the instance.
I created a new service and it worked perfectly for about a minute, but after that its status went down.
Then I tried changing the configuration of the old services, with no luck.
Now the services are running fine, but in the config file the mid_sys_id parameter is not getting populated, and the status of the MID Server in the instance is always Down.
Do I need to change any properties in the instance?
Since I use a proxy server, do I need to uncomment the auto-upgrade-through-proxy section, or can I leave it blank?
What is the issue here? Why is my MID Server not getting upgraded? Kindly help me. If possible, an explanation with screenshots would be much appreciated.
If you're on Geneva, you should be able to simply go to your MID Server listing from inside your instance, click on a MID Server, and find 'Upgrade MID' in the Related Links section. That should start an upgrade for that MID Server. If it fails, there should be an entry in either the ServiceNow logs or the MID Server logs with more information about why it failed.
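As a quick sanity check on the MID Server side, you can dump the registration and proxy parameters from the agent's config.xml. This is only a sketch: the install path is a placeholder, and the parameter names listed are the ones typically present in config.xml; parameters that are still commented out in the file will not appear in the output.

# Print the MID Server config.xml parameters that matter for registration
# and proxy use, so you can see whether mid_sys_id and the proxy settings
# are actually populated. XML-commented parameters are skipped by the parser.
import xml.etree.ElementTree as ET

CONFIG = r'C:\MID Server\agent\config.xml'   # placeholder: adjust to your install path
INTERESTING = ('url', 'name', 'mid_sys_id',
               'mid.proxy.use_proxy', 'mid.proxy.host', 'mid.proxy.port')

root = ET.parse(CONFIG).getroot()
for param in root.iter('parameter'):
    if param.get('name') in INTERESTING:
        print(param.get('name'), '=', param.get('value'))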

How to track down long running calls to IIS?

Our users are restless. They keep complaining about woolly, unmeasurable stuff, particularly slowness, without giving specifics, which of course makes it very difficult to track down.
Nonetheless, it is quite possible that they are right, that there are server calls that are taking way too long to come back. So I want to put some kind of sniffer on the web site (we're using ASP.NET MVC 4 on IIS7) that will log any call that takes more than n seconds to turn around, or that returns more than x megabytes of data, along with all request parameters, the response size, and maybe a certain amount of response data.
I haven't a clue how to do this, though. Any suggestions?
Here is my take on this:
FRT
While you can use Failed Request Tracing to log slow requests, in my experience it is more useful for finding out why a request fails before it hits your application than for finding out why it is running slowly. Nine times out of ten it is simply going to show you that the slowdown is somewhere in your code.
Log Parser
Yes, you can download and analyze IIS logs. I use Log Parser Lizard to do the analysis; it's a great GUI over Log Parser. Here's a sample of how you might query requests slower than 1000 ms:
SELECT
To_String(To_timestamp(date, time), 'dd/MM/yyyy hh:mm:ss') As Time,
cs-uri-stem, cs-uri-query, cs-method, time-taken, cs-bytes, sc-status
FROM
'C:\inetpub\logs\LogFiles\W3SVC1\u_ex140721.log'
WHERE
time-taken > 1000
ORDER BY time-taken desc
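If you would rather script it than use a GUI, roughly the same slow-request filter can be done in a few lines of Python. This is only a sketch, not a Log Parser replacement: it assumes the default W3C field layout declared in the log's #Fields: header and uses the same 1000 ms threshold as the query above.

# Scan a W3C-format IIS log and print requests whose time-taken exceeds 1000 ms.
LOG = r'C:\inetpub\logs\LogFiles\W3SVC1\u_ex140721.log'
THRESHOLD_MS = 1000

fields = []
with open(LOG) as f:
    for line in f:
        if line.startswith('#Fields:'):
            fields = line.split()[1:]          # field names follow '#Fields:'
        elif line.startswith('#') or not fields:
            continue
        else:
            row = dict(zip(fields, line.split()))
            taken = row.get('time-taken', '0')
            if taken.isdigit() and int(taken) > THRESHOLD_MS:
                print(row['date'], row['time'], row['cs-method'],
                      row['cs-uri-stem'], taken, row['sc-status'])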
New Relic
My recommendation: go easy on yourself and sign up for a free trial. No, I don't work for them, but I've used their APM product a lot. Install the agent on the server and set it up; in 10 minutes you will be amazed at the data you see about the site. Trust me.
It's designed to work in production environments and gives you amazing depth of information on what's running slowly, down to the database query and stack traces. It's pure awesome. Once it's set up, wait for the next user complaint, then log in and look at the traces for that time frame.
When your pro trial ends you can still get valuable data on the free tier, but it will only keep the last 24 hours. We purchased licenses; expensive, yes, but worth every cent. Why? The time taken to identify root causes was reduced by an order of magnitude, we can be proactive by looking at what is number 2, 3 and 4 on the slow-requests list and working on those before they become big problems, and the alerting makes us much more responsive when things go wrong.
Code it
You could roll your own. This blog uses MVC ActionFilters to do the logging. You could also use an HttpModule similar to this post. The nice thing about that approach is that you can compile and deploy the module separately from your application, then just drop in the DLL and update web.config to wire up the module. I would be wary of these approaches on a very busy site. Also, getting the right level of detail to fully identify the root cause is challenging.
View Requests
As touched on by Appleman1234, IIS has a little-known feature for looking at requests that are currently executing. It's handy for the 'hey, it's running slow right now' situation. You can use appcmd.exe or the IIS GUI to do it. You will need to install the 'Request Monitor' IIS feature for this to work. This approach is OK for rudimentary narrowing of the problem, but it does not show you what's running slowly in your controller.
There are various ways you can do this:
Failed Request Tracing (FRT), formerly known as Failed Request Event Buffering (FREB), with a custom failure condition of taking over a certain time to load/run
Logging request information with IIS logging functionality and then using a tool like LogParserStudio
Using tools like Fiddler or IISMonitor on the IIS server to capture request information
For FRT, the official documentation is available here, and information on how to capture dumps for a long-running process is available here.
For logging request information in IIS, information about log file analysis is located here.
For information on configuring Fiddler to capture IIS requests, see here.
A summary of the steps in the linked resources is provided below.
For FRT
From IIS Manager for a given site, in the Actions pane, under Configure, click Failed Request Tracing and enter the desired values in the dialog box to enable Failed Request Tracing.
From IIS Manager for a given site, under IIS, click Failed Request Tracing Rules to define the failure rules for a given request. In the Actions pane, click Add and follow the wizard.
The logs will go into the directory you specify and are viewable in a web browser.
For IIS logging
Logging is enabled by default on IIS.
From IIS Manager for a given site, under IIS, click Logging, and in the Actions pane click Enable to enable logging if it isn't already.
From IIS Manager for a given site, under IIS, click Logging, then configure as desired and click Apply.
Install Log Parser, .NET 4.x and LogParserStudio (if you need additional steps, see here).
Open LogParserStudio and add your logs to it; you can then use SQL queries to get information from the log files.
For Fiddler
You need to change the user that IIS runs as to a user that can launch applications, like Fiddler (instead of Network Service), and then launch Fiddler with that user.
Also see Monitor Activity on a Web Server (IIS 7) for further information.

WebLogic EM console's admin server / managed server status & metrics are unavailable

The WebLogic EM console shows "Status Pending" for the admin server and managed servers even when they are up. It is not showing other metrics either.
Below are the first lines of the error stack trace.
oracle.sysman.emas.sdk.model.metric.MetricsUnavailableException
at oracle.sysman.emas.sdk.model.metric.WLMetricProvider.getMetricServiceMBean(WLMetricProvider.java:313)
at oracle.sysman.emas.sdk.model.metric.WLMetricProvider.queryMetricTable(WLMetricProvider.java:364)
at oracle.sysman.emas.sdk.model.metric.WLMetricProvider.getMetrics(WLMetricProvider.java:155)
at oracle.sysman.emas.sdk.model.metric.MetricUtil.getMetrics(MetricUtil.java:271)
I would suggest opening a ticket with Oracle. You could try enabling the Platform MBean Server, which is under Domain home -> Advanced, and then restarting the domain. Usually this fixes it, but without logs it is very hard to tell. Please try to provide us with the sysman logs for EM.
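If you prefer to flip that setting from WLST instead of the console, a minimal sketch looks like the following. It assumes the console setting maps to the PlatformMBeanServerEnabled attribute on the domain's JMX MBean; the URL, credentials and the domain name base_domain are placeholders, and the domain still needs a restart afterwards.

# Enable the Platform MBean Server on the domain's JMX MBean via WLST.
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()
cd('/JMX/base_domain')
cmo.setPlatformMBeanServerEnabled(true)
save()
activate()
disconnect()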
I found the solution to this issue. It is a bug, and Oracle has released a patch for it: BUG_13826887 / patch p13826887_111160_Generic.
1. Set ORACLE_HOME and the java environment.
2. Check which opatch is being picked up, then run opatch apply.
3. Restart the complete WebLogic domain.

Silverlight wcf connection error

I'm about a month into developing my Silverlight application (this is my first). Everything went rather smoothly until today, when out of the blue I started getting this message:
An error occurred while trying to make a request to URI 'http://localhost:2682/Services/Authentication/LoginService.svc'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details.
I'm using WCF Services and this issue never appeared until now.
I've added a crossdomain.xml and a clientaccesspolicy.xml file to my [projectname].web folder and rewritten them about a thousand different ways.
I've also used Fiddler, and it shows me that the error occurs for both of those files; the error is:
[Fiddler] The socket connection to localhost failed. ErrorCode: 10061. No connection could be made because the target machine actively refused it 127.0.0.1:2682
I've searched for error 10061 and it has to do with the socket connection, but I couldn't find any solution to it.
I don't know if it has anything to do with this, but my "ASP.NET Development Server" port is 6939.
Keep in mind that the app has NOT been deployed, so this is only happening locally. I'm using MS VS 2010 and MS SQL Server 2008.
Am I doing anything wrong, or is this a Silverlight issue?
On a last note, I haven't changed anything in the port, socket or service configuration. The last thing I was doing was editing a XAML file on the client side, and then the app started throwing this error.
I need help; I can't do anything until this is solved!
Thanks.
I think you are running your app on localhost and a dynamic port is getting assigned; the port is not fixed across runs, and that causes the connection-refused problem. If you want to fix this, create a fixed URL, for example:
http://localhost/apps/Services/Authentication/LoginService.svc
Well, last night, just before I went to bed, I noticed something odd. In my "ServiceReferences.ClientConfig" file, the endpoint ports for each of my services were different from the ones the Silverlight machine was using, so going on a hunch (and because I was reaching my sanity breakpoint) I decided to remove all my service references and re-add them.
It worked... go figure. I still don't know why this happened, and if anyone could shed some light on the subject I would appreciate it. It's kind of annoying having to re-add all my service references. Right now I have only 6 of them, but in the near future that may grow to over 20, and if this happens again... well, it's going to be a real pain...
Thanks