What is causing my WLST scripts to time out when updating node manager credentials? - weblogic

I'm using WebLogic 10.3.6 (it's old, I know) with Java 6 and am trying to install in a new environment. I'm using WLST scripts that worked in other environments where the OS is the same (CentOS 5.11), SELinux is in permissive mode, and the application user has permission to write to the directory where all of the WLS files live. Each time I try to update anything in the SecurityConfiguration MBean, I get a timeout when trying to activate. Initially I thought it was just the node manager credentials, but even this fails:
cmo.setClearTextCredentialAccessEnabled(true)
validate()
save()
activate()
Everything is fine until the activate... here are the results:
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released
once the activation is completed.
Traceback (innermost last):
File "<console>", line 1, in ?
File "<iostream>", line 376, in activate
File "<iostream>", line 1847, in raiseWLSTException
WLSTException: Error occured while performing activate : Error while Activating changes. : [DeploymentService:290053]Request with id '1,488,512,657,383' timed out on admin server.
Use dumpStack() to view the full stacktrace
What am I missing?

The real issue turned out to be the NFS share where my binaries were shared across instances. The NFS was configured in a very non-standard way, causing all kinds of file read/write timeouts.
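For anyone hitting the same symptom, here is a minimal WLST sketch of the kind of edit session involved, assuming an admin URL of t3://localhost:7001, a domain named mydomain, and placeholder credentials (the setters are the standard SecurityConfigurationMBean ones):

# Minimal WLST sketch; URL, domain name, and credentials are placeholders.
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit()
startEdit()
# Navigate to the domain's SecurityConfiguration MBean.
cd('/SecurityConfiguration/mydomain')
# Update the node manager credentials (values are placeholders).
cmo.setNodeManagerUsername('nodemanager')
cmo.setNodeManagerPassword('nmpassword')
validate()
save()
# activate() accepts an optional timeout in milliseconds; raising it only
# hides slow storage, which is why the NFS configuration was the real fix.
activate(300000, block='true')
disconnect()

If activate() still times out even with a generous timeout, it is worth checking the storage backing the domain directory before blaming the script.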

Related

Redhat with httpd24 connecting to Informix using DBI

I'm at my wits' end on this. I have two RHEL 7 boxes on which I just installed httpd24 (v2.4.34). They were running httpd (v2.4.6) without any connection problems. Now when I try to run Perl scripts from the browser, they fail with...
install_driver(Informix) failed: Can't load '/usr/local/lib64/perl5/auto/DBD/Informix/Informix.so' for module DBD::Informix: libifsql.so: cannot open shared object file: No such file or directory at /usr/lib64/perl5/DynaLoader.pm line 190.
at (eval 5) line 3.
Compilation failed in require at (eval 5) line 3.
Perhaps a required shared library or dll isn't installed where expected
at /var/www/html/app/cgi-bin/test_informix_odbc.cgi line 35.
But when I run the same script from the command line, as 'apache', it runs just fine. All the ENV vars are set correctly.
Anyone run into anything similar before?
Newer versions of httpd have stopped bringing the user environment in when the service is started, so it would no longer use the LD_LIBRARY_PATH environment variable I was setting in httpd.conf. I found this little blurb in /opt/rh/httpd24/service-environment:
Services are started in a fresh environment without any influence of user's environment (like environment variable values). As a consequence, information of all enabled collections will be lost during service start up.
grep -r "LD_LIBRARY_PATH" /opt/rh/httpd24/
/opt/rh/httpd24/enable:export LD_LIBRARY_PATH=/opt/rh/httpd24/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
I prepended the standard Informix library paths in /opt/rh/httpd24/enable:
export LD_LIBRARY_PATH=/opt/IBM/informix/lib:/opt/IBM/informix/lib/esql:/opt/rh/httpd24/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
And everything is back to normal. Woohoo!

Setting up S3 logging in Airflow

This is driving me nuts.
I'm setting up Airflow in a cloud environment. I have one server running the scheduler and the webserver and one server as a Celery worker, and I'm using Airflow 1.8.0.
Running jobs works fine. What refuses to work is logging.
I've set up the correct path in airflow.cfg on both servers:
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn
I've set up s3_logging_conn in the airflow UI, with the access key and the secret key as described here.
I checked the connection using
s3 = airflow.hooks.S3Hook('s3_logging_conn')
s3.load_string('test','test',bucket_name='my-bucket')
This works on both servers. So the connection is properly set up. Yet all I get whenever I run a task is
*** Log file isn't local.
*** Fetching here: http://*******
*** Failed to fetch log file from worker.
*** Reading remote logs...
Could not read logs from s3://my-bucket/airflow_logs/my-dag/my-task/2018-02-15T21:46:47.577537
I tried manually uploading a log file following the expected conventions and the webserver still can't pick it up, so the problem is on both ends. I'm at a loss as to what to do; everything I've read so far tells me this should be working. I'm close to just installing 1.9.0, which I hear changes logging, to see if I have more luck.
UPDATE: I made a clean install of Airflow 1.9 and followed the specific instructions here.
Webserver won't even start now with the following error:
airflow.exceptions.AirflowConfigException: section/key [core/remote_logging] not found in config
There is an explicit reference to this section in this config template.
So I tried removing that reference and loading the S3 handler unconditionally, and I got the following error message instead:
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/lib64/python3.6/logging/config.py", line 384, in resolve:
self.importer(used)
ModuleNotFoundError: No module named
'airflow.utils.log.logging_mixin.RedirectStdHandler';
'airflow.utils.log.logging_mixin' is not a package
I get the feeling that this shouldn't be this hard.
Any help would be much appreciated, cheers
Solved:
- upgraded to 1.9
- ran the steps described in this comment
- added the following to airflow.cfg:
[core]
remote_logging = True
- ran pip install --upgrade airflow[log]
Everything's working fine now.
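Putting the pieces together, the remote-logging settings in airflow.cfg on both servers end up looking roughly like this (bucket path and connection id as used above; this is a consolidated sketch, not a complete config file):

[core]
remote_logging = True
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn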

Zope: Weird "couldn't install" error

I get a weird error in my log-files when I start a Zope instance. The instance is running with a ZEO server and the Zope installation is a virtualenv (in /home/myUser/opt). I get this error with several Products, but Zope is working fine and these products are installed. Here is an example with the product BTreeFolder2:
2014-01-22T12:38:13 ERROR Application Couldn't install BTreeFolder2
Traceback (most recent call last):
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/Zope2-2.13.21-py2.7.egg/OFS/Application.py", line 693, in install_product
transaction.commit()
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/transaction-1.1.1-py2.7.egg/transaction/_manager.py", line 89, in commit
return self.get().commit()
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 329, in commit
self._commitResources()
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 446, in _commitResources
rm.tpc_vote(self)
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZODB/Connection.py", line 781, in tpc_vote
s = vote(transaction)
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZEO/ClientStorage.py", line 1098, in tpc_vote
return self._check_serials()
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZEO/ClientStorage.py", line 929, in _check_serials
raise s
ConflictError: database conflict error (oid 0x01, class OFS.Application.Application, serial this txn started with 0x03a449da3a7b1e44 2014-01-22 11:38:13.706468, serial currently committed 0x03a449da3af74dee 2014-01-22 11:38:13.820164)
I'd like to fix this, even if it does not affect the functionality of my site, but I don't know where to look. Any suggestions? :)
Or does this just mean that the cached objects have to be renewed with the data of the ZEO server?
When a Zope instance starts up, it registers extensions (Products) in the ZODB. When you run multiple instances sharing a ZEO server, however, and they all start up at roughly the same time, you run into conflicts during this stage. The conflicts are essentially harmless; they only occur because another instance already succeeded in installing the persistent components.
The solution is to configure instances not to register products, except for one (usually one per machine in a multi-machine cluster); then on restart only one of the instances does the registration, and the rest, running the same software stack, won't have to.
Note that in more recent Zope installations, the persistent parts for Products have been deprecated and mostly disabled; if you don't use Through-The-Web products or ZClasses, you probably don't need to enable this at all.
When you do need it, set the enable-product-installation configuration in zope.conf to on for just one instance, off for the rest. If you use buildout, and the plone.recipe.zope2instance recipe, you can specify this setting in the buildout recipe configuration.
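As a sketch, the designated instance's zope.conf would contain the directive named above:

enable-product-installation on

and every other instance would get:

enable-product-installation off

With buildout, one way to inject this (assuming the recipe's zope-conf-additional option for passing raw zope.conf lines) would be:

[instance2]
recipe = plone.recipe.zope2instance
zope-conf-additional =
    enable-product-installation off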

Has anyone come across this php error before, Warning: imagejpeg()?

Warning: imagejpeg() [function.imagejpeg]: Unable to open '/home/SITENAME/public_html/files/cache/052f225905c1618003df0c5088aec7a9.jpg' for writing: Permission denied in /home/SITENAME/public_html/concrete/helpers/image.php on line 172
I emptied the cache directory and still no luck, and if I change the permissions on the cache folder then I get another error and I can't use the site at all:
Warning: require_once(Zend/Cache/Backend/File.php) [function.require-once]: failed to open stream: No such file or directory in /home/MYACCOUNT/public_html/concrete/libraries/3rdparty/Zend/Cache.php on line 133
Fatal error: require_once() [function.require]: Failed opening required 'Zend/Cache/Backend/File.php' (include_path='.:/usr/lib/php:/usr/local/lib/php:/home/owen/php') in /home/MYACCOUNT/public_html/concrete/libraries/3rdparty/Zend/Cache.php on line 133
I don't get it; I've never had this problem before.
Sounds like a permissions problem to me, but we can't tell from this end.
If you can FTP (or cd) into /home/SITENAME/public_html/files/, check whether 'files' is owned by the same user, and has the same permissions, as public_html.
Then find out what permissions they need to have for your hosting setup.
Check that the directory exists.
Check whether the web server daemon (most of the time www-data) has write permission to that particular directory.
For future reference, the problem was the PHP handler. It had been changed to CGI mode (as opposed to DSO) and suEXEC had been turned off; this might be useful for someone down the line.

websphere jython scripts cannot access AdminTask

We have an auto-deployment script that uses wsadmin and Jython. The script appears to work as expected; however, after 6-7 redeployments the AdminTask object becomes unavailable, resulting in the following error when we attempt to use that object:
WASX7209I: Connected to process "server1" on node ukdlniqa41Node01 using SOAP connector; The type of process is: UnManagedProcess
WASX8011W: AdminTask object is not available.
...
Traceback (innermost last):
File "<string>", line 251, in ?
File "<string>", line 14, in main
File "<string>", line 38, in initialize
NameError: AdminTask
My question is, what would cause this AdminTask object to become unavailable? (it remains unavailable until we restart the server instance)
AdminTask may be unavailable if one of the previous tasks did not finish properly. This happens often, especially if your server is in development mode. I would suggest gathering the Deployment MustGather as per http://www-01.ibm.com/support/docview.wss?rs=180&context=SSCR4XA&q1=MustGatherDocument&uid=swg21199344&loc=en_US&cs=utf-8&lang=en and submitting the results to IBM.
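Until the root cause is found, a purely illustrative guard in the Jython script can at least fail fast instead of dying mid-deployment with a NameError. This assumes, as the WASX8011W message above suggests, that wsadmin simply does not inject AdminTask into the script's global namespace when it is unavailable:

import sys

# Illustrative only: detect the missing AdminTask object before using it.
# wsadmin's Jython is old (2.1), so use has_key() rather than the 'in' operator.
if not globals().has_key('AdminTask'):
    print 'AdminTask is not available (WASX8011W); restart the server instance and retry.'
    sys.exit(1)

# ... continue with AdminTask-based deployment steps ...

This does not fix the underlying leak, but it keeps a broken session from half-completing a deployment.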