websphere jython scripts cannot access AdminTask - scripting

We have an auto-deployment script that uses wsadmin and Jython. The script appears to work as expected; however, after 6-7 redeployments the AdminTask object becomes unavailable, resulting in the following error when we attempt to use it:
WASX7209I: Connected to process "server1" on node ukdlniqa41Node01 using SOAP connector; The type of process is: UnManagedProcess
WASX8011W: AdminTask object is not available.
...
Traceback (innermost last):
File "<string>", line 251, in ?
File "<string>", line 14, in main
File "<string>", line 38, in initialize
NameError: AdminTask
My question is: what would cause the AdminTask object to become unavailable? (It remains unavailable until we restart the server instance.)

AdminTask may become unavailable if one of the previous tasks does not finish properly. This happens especially often when your server is in development mode. I would suggest gathering the Deployment MustGather as per http://www-01.ibm.com/support/docview.wss?rs=180&context=SSCR4XA&q1=MustGatherDocument&uid=swg21199344&loc=en_US&cs=utf-8&lang=en and submitting the results to IBM.

Program started on boot cannot access /run/user/1000

I have a question regarding an interesting error I'm getting from a Python (3) program being started by systemd. This is all happening on a Raspberry Pi Zero running a fully updated Raspberry Pi OS. It's the brain of a Google AIY Voice Kit v2, though that doesn't seem to be terribly important here.
The systemd service in question runs my Python program, which calls aiy.voice.tts.say("Example text"). However, this returns a FileNotFoundError - the full traceback is:
May 28 21:50:11 voicekit-zero autostart.sh[620]: Traceback (most recent call last):
May 28 21:50:11 voicekit-zero autostart.sh[620]: File "/home/pi/ready.py", line 27, in <module>
May 28 21:50:11 voicekit-zero autostart.sh[620]: """, volume=5)
May 28 21:50:11 voicekit-zero autostart.sh[620]: File "/home/pi/AIY-projects-python/src/aiy/voice/tts.py", line 52, in say
May 28 21:50:11 voicekit-zero autostart.sh[620]: with tempfile.NamedTemporaryFile(suffix='.wav', dir=RUN_DIR) as f:
May 28 21:50:11 voicekit-zero autostart.sh[620]: File "/usr/lib/python3.7/tempfile.py", line 686, in NamedTemporaryFile
May 28 21:50:11 voicekit-zero autostart.sh[620]: (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
May 28 21:50:11 voicekit-zero autostart.sh[620]: File "/usr/lib/python3.7/tempfile.py", line 397, in _mkstemp_inner
May 28 21:50:11 voicekit-zero autostart.sh[620]: fd = _os.open(file, flags, 0o600)
May 28 21:50:11 voicekit-zero autostart.sh[620]: FileNotFoundError: [Errno 2] No such file or directory: '/run/user/1000/tmponnl3w02.wav'
It's reasonably clear from this traceback that the TTS script writes a WAV file to a temporary location under /run/user/1000/, then uses it for playback. By that point, however, that directory is inaccessible. My best guess is that the filesystem isn't fully initialized yet. (I'm not certain of this, and I don't have much experience with systemd services, so I could definitely be wrong.)
The systemd service file specifies Wants and After for both network-online.target and systemd-timesyncd.service, though of course neither of those are directly related to filesystem readiness. Is there another service I can start after that will ensure the file system is ready for this call? If not, I can just wait a few seconds, though I'd prefer to build a more robust system that should work reliably.
Thanks!
This issue turned out to be easier than I'd anticipated, once I'd filled in some missing details.
First, this wasn't a problem with Python, or a delay in mounting the filesystem; it centered on the login state of the relevant user (in this case pi, which has UID 1000) at runtime. I discovered that if I logged in via SSH, the error in question disappeared.
Researching that led me to loginctl enable-linger:
Enable/disable user lingering for one or more users. If enabled for a specific user, a user manager is spawned for the user at boot and kept around after logouts. This allows users who are not logged in to run long-running services. Takes one or more user names or numeric UIDs as argument. If no argument is specified, enables/disables lingering for the user of the session of the caller.
Running this command is all that was required to solve the issue.
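For extra robustness, a script can also fall back to a directory that always exists when the per-user runtime dir has not been created yet. This is a sketch, not part of the AIY library; the helper name and the default UID of 1000 are illustrative.

```python
import os
import tempfile


def pick_tmpdir(uid=1000):
    """Use /run/user/<uid> when the user runtime dir exists, else the system tmp dir."""
    run_dir = "/run/user/%d" % uid
    return run_dir if os.path.isdir(run_dir) else tempfile.gettempdir()
```

With lingering enabled the runtime dir is created at boot, so the fallback should rarely fire; it just keeps the program from crashing if it does.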

Chromedriver executable path problems even though executable path is in PATH [duplicate]

This question already has answers here:
selenium.common.exceptions.WebDriverException: Message: 'chromedriver' executable needs to be in PATH error with Headless Chrome (1 answer)
WebDriverException: Message: 'chromedriver' executable needs to be in PATH while setting UserAgent through Selenium Chromedriver python (1 answer)
Closed 2 years ago.
I've been working on a website automation program for a month; everything went well and worked perfectly fine until I had to reset my laptop to factory settings because of a persistent black screen that appeared every time I turned the laptop on.
I'm running a laptop with Windows 10, PyCharm is my IDE, and I code in Python (64-bit version). I use ChromeDriver; sadly there's no 64-bit build, but that never caused a problem before.
Now, when I try to run the code I saved from the finished project, it just keeps saying
"'chromedriver.exe' executable needs to be in PATH."
I tried everything I could find: I copied chromedriver into the PyCharm folder, I unzipped it on my Desktop, I used the executable_path=r'...' option, I tried to install webdriver-manager... nothing helped, and webdriver-manager couldn't even be downloaded.
I think it's important to say that I have two drives, one called "Windows (C:)" and the other "Data (D:)" (which has 380 GB of space), so I installed PyCharm and Python on Data (D:).
I'd really appreciate it if anyone could help me with this problem; it's driving me crazy that I've tried everything to fix it, and it seems no one has ever had such a problem.
Anyway, have a great day, dear Stack Overflow community!
This is the error code:
C:\Users\User\PycharmProjects\Ersters\venv\Scripts\python.exe C:/Users/User/PycharmProjects/Ersters/Ersters.py
Automatic generator started, please wait.
Traceback (most recent call last):
File "C:\Users\User\PycharmProjects\Ersters\venv\lib\site-packages\selenium\webdriver\common\service.py", line 72, in start
self.process = subprocess.Popen(cmd, env=self.env,
File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 1307, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the management file
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/User/PycharmProjects/Ersters/Ersters.py", line 15, in <module>
driver = webdriver.Chrome(options=options, executable_path=r'C:\Users\User\Desktop\Python\chromedriver.exe')
File "C:\Users\User\PycharmProjects\Ersters\venv\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 73, in __init__
self.service.start()
File "C:\Users\User\PycharmProjects\Ersters\venv\lib\site-packages\selenium\webdriver\common\service.py", line 81, in start
raise WebDriverException(
selenium.common.exceptions.WebDriverException: Message: 'chromedriver.exe' executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
Process finished with exit code 1
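Since the question was closed as a duplicate, one quick diagnostic worth noting: before handing a path to Selenium, you can check whether Windows can actually resolve the driver binary using shutil.which. The helper below is an illustrative sketch (not Selenium API); it mirrors Selenium's behavior of searching PATH when no explicit path is given.

```python
import shutil


def resolve_chromedriver(explicit_path=None):
    """Return a driver path the OS can actually launch, or None if unresolvable."""
    if explicit_path:
        # which() on a concrete path verifies the file exists and is executable
        return shutil.which(explicit_path)
    # otherwise search PATH, as Selenium does when no executable_path is given
    return shutil.which("chromedriver") or shutil.which("chromedriver.exe")
```

If this returns None for the path you pass to webdriver.Chrome, the error above is expected: the file is missing, misnamed (e.g. chromedriver.exe.exe after unzipping), or not where you think it is.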

What is causing my WLST scripts to time out when updating node manager credentials?

I'm using WebLogic 10.3.6 (it's old, I know) with Java 6 and am trying to install in a new environment. I'm using WLST scripts that worked in other environments where the OS is the same (CentOS 5.11), SELinux is in permissive mode, and the application user has permission to write to the directory where all of the WLS files are saved. Each time I try to update anything in the SecurityConfiguration MBean, I get a timeout when trying to activate. Initially I thought it was just the node manager credentials, but then I tried:
!> cmo.setClearTextCredentialAccessEnabled(true)
!> validate()
!> save()
!> activate()
Everything is fine until the activate... here are my results:
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released
once the activation is completed.
Traceback (innermost last):
File "<console>", line 1, in ?
File "<iostream>", line 376, in activate
File "<iostream>", line 1847, in raiseWLSTException
WLSTException: Error occured while performing activate : Error while Activating changes. : [DeploymentService:290053]Request with id '1,488,512,657,383' timed out on admin server.
Use dumpStack() to view the full stacktrace
What am I missing?
The real issue causing this problem was the NFS share holding my binaries, which was shared across instances. The NFS was configured in a VERY non-standard way, causing all kinds of file read/write timeouts.
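For what it's worth, while chasing a timeout like this it can help to retry the activation a few times before tearing down the edit session. The wrapper below is a generic sketch, not WLST API: activate_fn stands in for WLST's activate call (e.g. something like lambda: activate(300000, "block") inside an actual wsadmin session).

```python
def activate_with_retry(activate_fn, attempts=3):
    """Invoke an activate-style callable, retrying when it raises (e.g. on a timeout)."""
    last_err = None
    for _ in range(attempts):
        try:
            return activate_fn()
        except Exception as err:  # would be a WLSTException inside a real WLST session
            last_err = err
    raise last_err
```

Of course, when the underlying cause is a misbehaving NFS mount, retries only mask the symptom; the fix above (correcting the NFS configuration) is the real answer.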

Zope: Weird "couldn't install" error

I get a weird error in my log-files when I start a Zope instance. The instance is running with a ZEO server and the Zope installation is a virtualenv (in /home/myUser/opt). I get this error with several Products, but Zope is working fine and these products are installed. Here is an example with the product BTreeFolder2:
2014-01-22T12:38:13 ERROR Application Couldn't install BTreeFolder2
Traceback (most recent call last):
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/Zope2-2.13.21-py2.7.egg/OFS/Application.py", line 693, in install_product
transaction.commit()
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/transaction-1.1.1-py2.7.egg/transaction/_manager.py", line 89, in commit
return self.get().commit()
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 329, in commit
self._commitResources()
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/transaction-1.1.1-py2.7.egg/transaction/_transaction.py", line 446, in _commitResources
rm.tpc_vote(self)
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZODB/Connection.py", line 781, in tpc_vote
s = vote(transaction)
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZEO/ClientStorage.py", line 1098, in tpc_vote
return self._check_serials()
File "/home/myUser/opt/Zope2-2.13.21/local/lib/python2.7/site-packages/ZODB3-3.10.5-py2.7-linux-x86_64.egg/ZEO/ClientStorage.py", line 929, in _check_serials
raise s
ConflictError: database conflict error (oid 0x01, class OFS.Application.Application, serial this txn started with 0x03a449da3a7b1e44 2014-01-22 11:38:13.706468, serial currently committed 0x03a449da3af74dee 2014-01-22 11:38:13.820164)
I'd like to fix this, even if it does not affect the functionality of my site, but I don't know where to look. Any suggestions? :)
Or does this just mean that the cached objects have to be renewed with the data of the ZEO server?
When a Zope instance starts up, it registers extensions (Products) in the ZODB. When you run multiple instances sharing a ZEO server, however, and they all start up at roughly the same time, you run into conflicts during this stage. The conflicts are essentially harmless; they only occur because another instance already succeeded in installing the persistent components.
The solution is to configure the instances not to register products, except for one (usually one per machine in a multi-machine cluster); then on restart, only one of the instances does the registration, and the rest, running the same software stack, won't have to.
Note that in more recent Zope installations, the persistent parts for Products have been deprecated and mostly disabled; if you don't use Through-The-Web products or ZClasses, you probably don't need to enable this at all.
When you do need it, set the enable-product-installation option in zope.conf to on for just one instance and off for the rest. If you use buildout and the plone.recipe.zope2instance recipe, you can specify this setting in the recipe's configuration.
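Concretely, the zope.conf line would look something like this on every instance except the one designated to do the installation (a sketch based on the standard directive name; adjust to your setup):

```
# zope.conf on every instance except the designated installer
enable-product-installation off
```

With buildout, the same line can typically be injected through the recipe's zope-conf-additional option rather than editing the generated zope.conf by hand.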

App crashing after too many missed heartbeats

I have an app that distributes load across a bunch of workers. So far all workers run on the same VM; I haven't needed to scale out yet.
My problem is that roughly every 3-4 days a worker crashes with the error message below: no contact between the client and the RabbitMQ server for 1200 seconds, I guess.
Traceback (most recent call last):
File "/var/www/vhosts/niklas/workers/builder.py", line 170, in <module>
BuildWorker().main()
File "/var/www/vhosts/niklas/lib/worker.py", line 29, in main
self.msgs.ch.start_consuming()
File "/usr/local/lib/python2.6/dist-packages/pika/adapters/blocking_connection.py", line 722, in start_consuming
self.connection.process_data_events()
File "/usr/local/lib/python2.6/dist-packages/pika/adapters/blocking_connection.py", line 93, in process_data_events
self.process_timeouts()
File "/usr/local/lib/python2.6/dist-packages/pika/adapters/blocking_connection.py", line 99, in process_timeouts
self._call_timeout_method(self._timeouts.pop(timeout_id))
File "/usr/local/lib/python2.6/dist-packages/pika/adapters/blocking_connection.py", line 164, in _call_timeout_method
timeout_value['method']()
File "/usr/local/lib/python2.6/dist-packages/pika/heartbeat.py", line 85, in send_and_check
return self._close_connection()
File "/usr/local/lib/python2.6/dist-packages/pika/heartbeat.py", line 106, in _close_connection
HeartbeatChecker._STALE_CONNECTION % duration)
File "/usr/local/lib/python2.6/dist-packages/pika/adapters/blocking_connection.py", line 75, in close
self.process_data_events()
File "/usr/local/lib/python2.6/dist-packages/pika/adapters/blocking_connection.py", line 91, in process_data_events
self._handle_timeout()
File "/usr/local/lib/python2.6/dist-packages/pika/adapters/blocking_connection.py", line 198, in _handle_timeout
self._on_connection_closed(None, True)
File "/usr/local/lib/python2.6/dist-packages/pika/adapters/blocking_connection.py", line 235, in _on_connection_closed
raise exceptions.AMQPConnectionError(*self.closing)
pika.exceptions.AMQPConnectionError: (320, 'Too Many Missed Heartbeats, No reply in 1200 seconds')
My question is: what could possibly cause this?
This only happens to about one out of three workers; the others run fine without any error message or warning (again, all workers and the rabbitmq-server are on the same VM).
I'm using the standard method in the Python library pika, start_consuming(), to retrieve new requests. The code is way too big to attach here, and considering the error message, the problem seems to be outside my code, or a system issue.
I'm using:
Python Pika 0.9.8
Rabbitmq 3.0.0
Debian 6.0
All workers are started inside screen
VM hosted at Linode, 512MB memory
We experienced a similar problem due to a bug (#236) in pika 0.9.8:
https://github.com/pika/pika/pull/236
This should be fixed in 0.9.9, or it can be resolved by patching your pika library with the source code attached to the linked pull request on GitHub.
(Pika was closing a connection after 2 cumulative missed heartbeats rather than 2 consecutive ones.)
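To illustrate the distinction, here is a small sketch (not pika's actual implementation) of consecutive versus cumulative miss counting:

```python
def connection_stale(heartbeats_received, max_missed=2):
    """True once `max_missed` heartbeats *in a row* are missed.

    `heartbeats_received` is a sequence of booleans, one per heartbeat
    interval (True = heartbeat arrived). The pika 0.9.8 bug effectively
    counted misses cumulatively, so two misses hours apart could close
    an otherwise healthy connection.
    """
    consecutive = 0
    for received in heartbeats_received:
        consecutive = 0 if received else consecutive + 1
        if consecutive >= max_missed:
            return True
    return False
```

With the correct consecutive logic, an occasional isolated miss (common on a loaded 512 MB VM) never trips the threshold; only a sustained silence does.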