I have to check connectivity to a portal that sits behind a WebSEAL server. I need the HTTP response code from the actual backend server, as well as the code the WebSEAL server returns. Please help me.
Try turning on debugging. Ideally, you are the only traffic on the server. Make sure to turn it on before you do anything that may be related to the problem (e.g. logging in), then wait at least one minute after the problem occurs to let the logs catch up.
server task default-webseald-xxxx trace set pdweb.snoop 9 file path=c:/pdweb.snoop.txt,rollover_size=100000000
server task default-webseald-xxxx trace set pdweb.debug 9 file path=c:/pdweb.debug.txt,rollover_size=100000000
Make sure to turn it off afterwards, as level-9 tracing will fill your disks very quickly.
server task default-webseald-xxxx trace set pdweb.snoop 0
server task default-webseald-xxxx trace set pdweb.debug 0
WebSEAL continuously polls the backend servers behind each junction. If you use pdadmin to run server task SERVER show /junction, you can see whether the backend server is online or offline.
I am assuming you mean the HTTP response code. You can get that from WebSEAL by issuing a simple HTTP(S) GET, just as you would against any regular HTTP server. For the backend server, however, you will need to go around WebSEAL and connect to the backend directly to perform your checks.
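A minimal PowerShell sketch of both checks (the hostnames and paths are placeholders for your environment):

# Returns the HTTP status code for a URL, including non-2xx responses.
function Get-HttpStatus($url) {
    try {
        (Invoke-WebRequest -Uri $url -UseBasicParsing).StatusCode
    } catch {
        # Invoke-WebRequest throws on 4xx/5xx; recover the code if a response exists.
        if ($_.Exception.Response) { [int]$_.Exception.Response.StatusCode }
        else { Write-Warning "No HTTP response from ${url}: $_" }
    }
}

Get-HttpStatus "https://webseal.example.com/portal"   # code returned by WebSEAL
Get-HttpStatus "http://backend.internal:8080/portal"  # code from the junctioned backend, bypassing WebSEAL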
If you are trying to set up monitoring, you can do one of two things. Either watch msg__webseal-default.log for the following line:
DPWWA2025W IBM Security Access Manager WebSEAL has lost contact with junction server:
Or you can use the server task command to show the junction and look for the Server State: being running or not running.
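For the first option, a minimal PowerShell sketch (the log path is a placeholder; adjust it to your installation):

# Tail the WebSEAL message log and flag lost-contact warnings.
Get-Content "C:\path\to\msg__webseal-default.log" -Wait |
    Where-Object { $_ -match "DPWWA2025W" } |
    ForEach-Object { Write-Warning "WebSEAL lost contact with a junction server: $_" }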
Problem summary:
HTTP probes towards ADFS & WAP are not enough if the ADFS service is still running but the connection between ADFS and the SQL database is dead.
[Diagram: ADFS environment]
[Diagram: ADFS environment with HTTP probes]
HTTP probes:
The normal way of running health checks against an ADFS environment is to set up HTTP probes that run checks against each WAP & ADFS server URL or IP.
They run health checks over HTTP port 80 and expect a 200 (OK) in return.
The response from these probe endpoints is an HTTP 200 OK, but it only reflects the state of the server/service locally, with no dependence on back-end services (SQL cluster/database).
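For illustration, such a probe boils down to a request like the following (the hostname is a placeholder; /adfs/probe is the HTTP health endpoint that WAP and ADFS expose on port 80):

# Expect 200 if the local ADFS/WAP service is up -- this says nothing about SQL.
(Invoke-WebRequest -Uri "http://adfs1.example.com/adfs/probe" -UseBasicParsing).StatusCode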
Conclusion:
Using HTTP probes against the ADFS & WAP servers is not enough.
Problem description:
The HTTP probes go directly to the WAP and ADFS servers respectively. This means that they only check whether the servers and services themselves are OK.
There's a known problem where the connection between the ADFS backend and the SQL server dies for 2-3 minutes. During this time, if you're unlucky, the ADFS backend server times out. The problem is that when the ADFS backend server times out, the ADFS service itself keeps running, so as far as the HTTP probe is concerned ADFS is still up, and the probe keeps signalling that the ADFS service is OK. The load balancer therefore keeps sending end users to an ADFS service whose connection to the SQL database is dead, simply because the service itself is still running. End users end up getting errors during authentication.
Question:
How can I set up a proper health check of the ADFS --> SQL cluster/database connection, so that you can see when communication between ADFS and SQL does not work as intended? That is, the case where the service on the ADFS servers is still running, but the database connection between ADFS and the SQL database is dead.
I would want that health check to be used for monitoring as a first step. Secondly, you could build recovery steps that are executed based on this health check.
• You can create a '.udl' file and put the required database connection details in it, along with the username and password of the account that the ADFS server uses to access the SQL Server DB instance. The steps to create a UDL file are below:
Create a text file, change its extension to '.udl', and open it.
On the Provider tab, select 'Microsoft OLE DB Provider for SQL Server'. On the Connection tab, enter the server's name, username, and password, and check the box labelled 'Allow saving password'.
Then select the database from the dropdown (the file makes an initial connection to enumerate the available databases) and hit the Test Connection button.
You will get a clear result telling you whether the connection from that server is successful or not.
You can also specify the connection timeout in seconds, along with the permissions for the selected database. In addition, on the 'All' tab you can set various parameters for this connection check, including saving the security information used for testing the DB connection.
Once done, you can create a Task Scheduler job that periodically runs a script to check the status of the connection using this file. Reading the file with Get-Content C:\UDLs\Test.udl gives you the saved connection string, which the script can then use to attempt an actual connection (see the sketch below).
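A minimal PowerShell sketch of such a scheduled check, assuming the connection string is the last line of the UDL file (UDL files are UTF-16 text with the connection string after the [oledb] header):

# Read the saved OLE DB connection string and try to open a connection with it.
$connStr = (Get-Content "C:\UDLs\Test.udl" -Encoding Unicode)[-1]
$conn = New-Object System.Data.OleDb.OleDbConnection($connStr)
try {
    $conn.Open()
    Write-Output "ADFS -> SQL connection OK"
} catch {
    Write-Warning "ADFS -> SQL connection FAILED: $_"
} finally {
    $conn.Close()
}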
In this way, you can probe whether the connection between your ADFS and SQL servers is intact, as a first step towards a health check.
For reference, see the Data Link pages documentation:
https://learn.microsoft.com/en-us/sql/connect/oledb/help-topics/data-link-pages?view=sql-server-ver15
In the WebLogic console, one of the nodes is not showing its health status, while the Admin server and the other nodes are showing their health status as "OK".
Is there an issue with communication between the Admin server and the managed server, or is there some other reason this node is not showing a health status?
What should be done?
I am only able to get the status of the managed server by restarting it.
It seems the problem is due to instability of the environment; there was a problem with the start arguments of the managed server.
Yes, that looks like a communication issue between the Admin and Managed server. What you can usually do is:
Restart everything. Start the Admin server first and wait until it is up, then start the Managed server. Does the problem still persist?
Check that the Managed server is started properly. See that there are no errors in the log file.
Check the logs of the managed server. See whether it complains about not being able to connect to the Admin server; otherwise, there should be a log message saying that the connection with the Admin server was established.
You can also start the Managed server without starting the Admin server, and see whether the Managed server tries to contact the Admin server or not.
Check the config.xml file of the Admin and Managed server. Check the node and server definitions, their IP addresses, ports, etc. (see the illustrative fragment below).
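For illustration, these are the kinds of entries to verify in config.xml (the names, addresses, and ports here are placeholders, and element order may differ between WebLogic versions):

<!-- The listen address/port must match what the Admin server expects
     and what the Managed server was actually started with. -->
<server>
  <name>ManagedServer1</name>
  <listen-port>7003</listen-port>
  <listen-address>managed1.example.com</listen-address>
</server>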
I'm using JMeter to load test my web application. I have two web servers, and we are using HAProxy for load balancing. All my tests run fine and are configured correctly. I have three JMeter remote clients so I can run my tests distributed. The problem I'm facing is that ALL my JMeter requests are being processed by only one of the web servers. For some reason it's not balancing, and I'm getting many timeouts and huge response times. I've looked around a lot for a way to get these requests balanced, but I'm having no luck so far. Does anyone know what could cause this behavior? Please let me know if you need to know anything about my environment first and I will provide the answers.
Check your haproxy configuration:
What is its load-balancing policy? If it's not round-robin, is it based on source IP or some other attribute that might be common to your 3 remote machines? (See the fragment after this list.)
Are you sure load balancing is working correctly? Try testing with a browser first; if you can, add some identifying information about the web server to the response to help debug.
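For illustration, a haproxy.cfg fragment (the backend name and addresses are placeholders). With balance source, all traffic from the same few JMeter machines hashes to the same web server; balance roundrobin spreads it evenly:

backend web_servers
    balance roundrobin          # not "balance source" for this kind of test
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check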
Check your test plan:
Are you sure you don't have a hardcoded session ID somewhere in your requests?
How many threads did you configure?
In your JMeter script, the HTTP Request "Use KeepAlive" option is checked by default.
Keep-Alive is a header that maintains a persistent connection between client and server, preventing a connection from breaking intermittently. Also known as HTTP keep-alive, it can be defined as a method to allow the same TCP connection to be used for HTTP communication instead of opening a new connection for each new request.
This can cause all requests to ride the same TCP connections and therefore go to the same server. Just uncheck the option, save, stop your script, and re-run.
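For reference, the corresponding flag in the saved .jmx test plan is the HTTPSampler.use_keepalive boolean property; with the option unchecked it looks like this:

<boolProp name="HTTPSampler.use_keepalive">false</boolProp>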
I am making a silent installation of SQL Server Express 2005 using the following command:
SECURITYMODE=SQL DISABLENETWORKPROTOCOLS=0 SAPWD="****" ADDLOCAL=SQL_Engine,SQL_Data_Files,SQL_Replication,Client_Components,Connectivity,SDK
I need to know whether there is a parameter or a command-line utility to configure the service to listen on port 7005 (see picture):
http://www.databasejournal.com/img/2007/01/mak_CLT_image002.jpg
I also need to create an alias using the command line.
Thanks
I'm currently trying to mess with this, here's what I've found:
To make SQL Server listen on TCP at all, you need to configure it to do so. You can use sac.exe to load a configuration from a file. Go to a working SQL Server configured with TCP/IP enabled (and whatever else you need) and run "sac.exe out settings.txt". Then, on the computer where you wish to enable TCP, run "sac.exe in settings.txt" and restart the service. sac.exe is in the "%programfiles%\Microsoft SQL Server\90\Shared" directory.
To configure a specific port, you'll have to edit the registry values. This article shows you the keys to edit: http://support.microsoft.com/kb/823938. To script it, you'll need to use the "REG ADD" command in a .bat file and write the appropriate value to the appropriate key. For me, it's HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer\SuperSocketNetLib\Tcp\IPAll\TcpPort, which should be set to the port number (the default for SQL Server is 1433), and then HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer\SuperSocketNetLib\Tcp\IPAll\TcpDynamicPorts, which should be set to empty.
Then restart the SQL Server service (net stop "SQL Server (SQLEXPRESS)" followed by net start "SQL Server (SQLEXPRESS)" on my machine).
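A PowerShell sketch equivalent to the REG ADD approach described above (the MSSQL.1 instance ID and the SQLEXPRESS service name are assumptions; confirm both on your machine):

# Force a fixed TCP port (7005, per the question) and clear the dynamic port setting.
$key = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer\SuperSocketNetLib\Tcp\IPAll'
Set-ItemProperty -Path $key -Name TcpPort -Value '7005'
Set-ItemProperty -Path $key -Name TcpDynamicPorts -Value ''
Restart-Service 'MSSQL$SQLEXPRESS'   # restart so the new port takes effect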
Hope this helps you or someone else searching for this information (like me)
I know this is more of a workaround than a proper solution, but instead of specifying (and then connecting from your application to) a certain TCP port, you can instead enable the SQL Browser service during the silent installation of SQL Server. This lets your app connect to the instance you're interested in based on the instance name, not the TCP port.
It may carry a performance penalty when creating a new connection, since an additional network round-trip is needed for the client to acquire the port for the instance, but (a) for client-server apps with long-lived connections it won't be a problem, and (b) for app-server apps with connection pooling, a well-configured pool won't be affected much (I think), or it may need slightly more connections to achieve the same performance.
An additional advantage is that you can have more than one SQL Server instance on the same host and not care about ports. For example, in your (and my) silent installation scenario, you would otherwise have to first check whether the port you want is used by another instance. Using the SQL Browser removes the need for that logic in your installation.
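For illustration, the difference shows up only in the client connection string (the host and instance names here are placeholders):

Server=MYHOST\SQLEXPRESS;Integrated Security=SSPI   (by instance name, resolved via SQL Browser)
Server=MYHOST,7005;Integrated Security=SSPI         (by fixed TCP port)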
If two web servers are configured between a load balancer and a WebLogic cluster, will the two Apache servers maintain session stickiness?
Say, for example, the load balancer forwards the first request to the 1st Apache, and the 1st Apache in turn forwards it to the 1st WL managed instance. Even if the second request from the same user is forwarded by the load balancer to the second Apache, will the second Apache be able to forward it to the 1st WL managed instance that served the first request, rather than to the second WL managed instance, which is not aware of the session information at all?
What should the behaviour of the WebLogic Apache plugin ideally be? The catch is that I don't want to enable session replication on the WL server cluster.
According to the section "Failover, Cookies, and HTTP Sessions" of the Apache HTTP Server Plug-In:
When a request contains session information stored in a cookie or in the POST data, or encoded in a URL, the session ID contains a reference to the specific server instance in which the session was originally established (called the primary server) and a reference to an additional server where the original session is replicated (called the secondary server). A request containing a cookie attempts to connect to the primary server. If that attempt fails, the request is routed to the secondary server. If both the primary and secondary servers fail, the session is lost and the plug-in attempts to make a fresh connection to another server in the dynamic cluster list. See Figure 3-1 Connection Failover.
Note: If the POST data is larger than 64K, the plug-in will not parse the POST data to obtain the session ID. Therefore, if you store the session ID in the POST data, the plug-in cannot route the request to the correct primary or secondary server, resulting in possible loss of session data.
[Figure 3-1: Connection Failover]
In other words, yes, both Apache servers will be able to forward an incoming request to the "right" WebLogic instance, as the session ID contains all the required information for that. Note that there is no real need to confirm this with testing, but it would be very easy to do.
UPDATE: Answering the following comment from the OP
I think this document stands good for only one apache server. In my case I have two and the load balancer forwards the requests to both the servers in a 50:50 manner. I did test this and the weblogic plugin is not maintaining the stickiness.
I understood that you are using two Apache frontends; I'm just not sure whether that document also applies to configurations with more than one Apache server. As explained, the session ID contains a reference to the primary server (and the secondary server as well), so both Apaches should be able to deal with it. At least, that is my understanding. Actually, I've worked with a similar configuration in the past, but I can't remember whether things worked as I think they should, or whether the load balancer was configured to handle stickiness too (i.e. to forward to a given Apache server). I have a little doubt now...
Could you post your plugin configuration (from both Apache servers if they differ)? Could you also confirm that things work as expected when only one Apache server is up (and test this with both Apaches if their configurations differ, which shouldn't be the case)?
When you have 2 Apache instances with a TCP load balancer in front, the stateflow diagram is not applicable anymore, because the Apache instances do not share their states.
I would guess that the WebLogic plug-in maintains state as a mapping [IPAddress+Port -> JVMID]. If it receives a cookie with a JVMID it does not know yet (for instance, because it has never sent a request to that server), it has no way of knowing which IPAddress+Port the JVMID refers to, so it cannot reuse it; it will reassign new primary/secondary servers, which will be identical for 2 instances (maybe swapped), and which might differ if there are strictly more than 2 instances.
I did not confirm this by running specific tests, but on paper it does not seem to work in all cases.
The answer is yes. We've got a write-up of this on our blog, http://blog.c2b2.co.uk/2012/10/basic-clustering-with-weblogic-12c-and.html, which provides step-by-step instructions on setting up web session failover in a cluster.
Essentially, the jsessionid cookie encodes the primary and secondary WebLogic servers. mod_wl parses the cookie and routes the request to the primary server, in your case Managed Server 1. If that server is down, it automatically routes the request to the backup, Managed Server 2.
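For illustration, the WebLogic session cookie has the form below (the values here are made up); the plug-in reads the JVMIDs after the "!" separators to pick the primary and, on failure, the secondary server:

JSESSIONID=<session-id>!<primary-JVMID>!<secondary-JVMID>
e.g. JSESSIONID=Bzt1sG2hTqXw!1227626312!1227626313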
The diagram above holds true for 2 Apache servers connected to the same WL cluster. The cookie's session info contains details on which WLS to connect to, and the plugin will respect that. If the primary WL server (the server it originally connected to) isn't available, the request is sent to the secondary server (designated as such at the time of the first request, based on the rules defined when selecting a "Preferred Replication Group"). This secondary server maintains the same session state as the primary WLS server and should be able to handle the request.
If session replication isn't set up (I think it is OFF by default), then no session is copied to another server, and if the original/primary WL server goes down, you lose the session.
The answer is NO. As you have 2 Apache web servers, you need to implement stickiness at both the hardware and software load-balancer levels in order to achieve your requirement.
That is, you already have sticky sessions implemented at the WebLogic plug-in (software) level for Apache, but you also need source-IP-based stickiness at the hardware load-balancer level. This will allow your hardware load balancer to send subsequent requests from the same user to the same Apache web server.