Photon Server 3 - Did not reach license monitor #0 error - cross-platform

I downloaded the free license (Photon 3 - Free License, 100 CCU, no expiry) and replaced the current 30-day trial license with it. When I try to start my Photon server, it doesn't work. I checked the log file and found this: 'Did not reach license monitor #0'.

There are two solutions for this problem, with #1 being the most convenient:
1) Start with no license file (delete it from the directory): Photon will run in 20 CCU mode
2) Use the 30-day trial license from the website again
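Option 1 boils down to removing the .license file from Photon's deploy directory before starting the server. A minimal sketch, with the directory simulated and the file name assumed (check your actual install layout):

```python
from pathlib import Path
import tempfile

# Simulated Photon deploy directory -- replace with your real path
# (the layout and file name here are assumptions; installs vary).
deploy_dir = Path(tempfile.mkdtemp())
(deploy_dir / "Photon-FreeLicense.license").write_text("dummy")

# Option 1: delete every *.license file so Photon falls back to 20 CCU mode.
for lic in deploy_dir.glob("*.license"):
    lic.unlink()

print(sorted(deploy_dir.glob("*.license")))  # -> []
```

After this, starting Photon with no license file present should put it into the 20 CCU fallback mode described above.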

Related

In Python Zope, how do I dump the error_log to the browser?

We are a big organization and we use Python Zope. Naturally we have two environments: prod and dev. I understand that in production we should not show the error log to end users for security reasons, but what about dev? It is cumbersome to check the error log manually every time I get an internal server error.
Can I dump the error log directly onto the browser?
Zope v. 4.6.2
Python v. 3.8.0b2 (default, Jul 9 2019, 16:47:40) [GCC 4.8.5]
As far as I remember, Zope uses Products.SiteErrorLog for logging errors.
On startup, Zope creates a SiteErrorLog object, which you can customize. Back then I customized it so that admin accounts could view the traceback in the browser, in both staging and production environments.
On my local developer box, I started Zope in foreground mode, which printed all errors directly to my terminal, with no need to dig through log files.
If you cannot manage to configure the error log, I would suggest creating an issue at https://github.com/zopefoundation/Products.SiteErrorLog or asking your question again at https://community.plone.org/ (tag: Zope), which is the most active Plone/Zope community online.
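The foreground trick works because Zope routes errors through Python's standard logging machinery, so on a dev box you can attach your own console handler. A minimal sketch, with the logger name assumed for illustration (check your zope.ini/zope.conf for the real names):

```python
import io
import logging

# The logger name below is an assumption -- Zope's actual error logger
# is configured in zope.ini / zope.conf; adapt it to your instance.
log = logging.getLogger("Zope.SiteErrorLog")
stream = io.StringIO()  # stand-in for sys.stderr in a real dev setup
handler = logging.StreamHandler(stream)
handler.setLevel(logging.ERROR)
log.addHandler(handler)

log.error("Internal Server Error: %s", "traceback goes here")
print(stream.getvalue().strip())
```

In a real instance you would point the handler at sys.stderr instead of a StringIO, giving you the same "errors in the terminal" behaviour as foreground mode.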

redis cluster continuously print log WSA_IO_PENDING

When I start up all the redis-server instances of the Redis cluster, every server continuously prints log lines like these:
[9956] 03 Feb 18:17:25.044 # WSA_IO_PENDING writing to socket fd
[9956] 03 Feb 18:17:25.062 # clusterWriteDone written 2520 fd 15
[9956] 03 Feb 18:17:25.545 # WSA_IO_PENDING writing to socket fd
[9956] 03 Feb 18:17:25.568 # WSA_IO_PENDING writing to socket fd
There is no way to specifically turn those "warnings" off in the 3.2.x port of Redis for Windows, as the logging statements use the highest level, LL_WARNING. This issue has been reported in my fork of that unmaintained MSOpenTech repo (which I updated to Redis 4.0.2) and has been fixed by decreasing that level to LL_DEBUG. More details: https://github.com/tporadowski/redis/issues/14
This change will be included in the next release (4.0.2.3) or you can get the latest source code and build it for yourself.
Current releases can be found here: https://github.com/tporadowski/redis/releases
An issue was opened in the official Redis repo 10 months ago about this problem. Unfortunately it seems to have been abandoned, and it hasn't been solved yet:
Redis cluster print "WSA_IO_PENDING writing to socket..." continuously, does it matter?
However, that issue may not be related to redis itself, but to the Windows Sockets API, as pointed out by Cy Rossignol in the comments. It's the winsock API that returns that status to the application, as seen in the documentation:
WSA_IO_PENDING (997)
Overlapped operations will complete later.
The application has initiated an overlapped operation that cannot be completed immediately. A completion indication will be given later when the operation has been completed. Note that this error is returned by the operating system, so the error number may change in future releases of Windows.
Maybe it didn't get much attention because it's not a bug, although it's indeed an inconvenience that floods the system logs. In that case, you may not get help there.
Seems like there's no temporary fix. The Windows Redis fork is archived and I don't know if you could get any help there either.
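Until a fixed build is available, you can at least filter the noise out when reading the log. A small sketch, with the noisy markers taken from the log lines in the question:

```python
# Markers of the flood of overlapped-I/O status lines (from the question).
NOISY_MARKERS = ("WSA_IO_PENDING", "clusterWriteDone")

def filter_redis_log(lines):
    """Drop the noisy status lines, keep everything else."""
    return [line for line in lines
            if not any(marker in line for marker in NOISY_MARKERS)]

sample = [
    "[9956] 03 Feb 18:17:25.044 # WSA_IO_PENDING writing to socket fd",
    "[9956] 03 Feb 18:17:25.062 # clusterWriteDone written 2520 fd 15",
    "[9956] 03 Feb 18:17:26.001 * Cluster state changed: ok",
]
print(filter_redis_log(sample))  # -> only the 'Cluster state changed' line
```

This only hides the noise when inspecting logs; it does not stop Redis from writing it, so log files will still grow.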
Go to C:\Program Files\Redis and open the file redis.windows-service.conf in Notepad.
You will find a section like the one below:
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output.
logfile "Logs/redis_log.txt"
Here you can change the value of loglevel as per your requirement. I think changing it to warning will solve this issue, because then only the most important messages are logged.

Apache Axis Error - No Engine Configuration File

We have an application running on WebSphere 6.1 and every couple of days we get the following appearing in the logs:
org.apache.axis.ConfigurationException: No engine configuration file - aborting!
at org.apache.axis.configuration.FileProvider.configureEngine(FileProvider.java:175)
at org.apache.axis.AxisEngine.init(AxisEngine.java:172)
at org.apache.axis.AxisEngine.<init>(AxisEngine.java:156)
at org.apache.axis.client.AxisClient.<init>(AxisClient.java:52)
at org.apache.axis.client.Service.getAxisClient(Service.java:103)
at org.apache.axis.client.Service.<init>(Service.java:112)
If we restart the JVMs we are fine for about 2-3 days before it kicks off again.
I have seen another question on this site which describes a similar issue and the answer for that was:
I solved this by copying the client_config.wsdd file to the WEB-INF/classes folder. Axis has not complained yet :)
However, as far as we know we don't have a specific client_config.wsdd file, nor have we ever had to configure one. If something were missing we would get this error every time, but as mentioned above it only happens every few days, and a stop/start of the JVMs resolves it for a while.
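Since the quoted fix relies on Axis finding the wsdd file on the classpath, a quick sanity check is to scan each deployed application's WEB-INF/classes for it. A sketch with a simulated webapp layout (the file name follows the quoted answer; adjust paths to your WebSphere install):

```python
from pathlib import Path
import tempfile

def find_wsdd(webapp_root):
    """Return any client_config.wsdd files under WEB-INF/classes."""
    return sorted(Path(webapp_root).glob("WEB-INF/classes/**/client_config.wsdd"))

# Simulated webapp directory tree, for illustration only.
root = Path(tempfile.mkdtemp())
classes = root / "WEB-INF" / "classes"
classes.mkdir(parents=True)
(classes / "client_config.wsdd").write_text("<deployment/>")

print([p.name for p in find_wsdd(root)])  # -> ['client_config.wsdd']
```

If the file is absent from every deployment, the intermittent failure is more likely a classloader or timing issue than a missing file, which matches the "fine for 2-3 days after restart" pattern.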

NServiceBus License File for Dev Machine (keeps requesting)

I've been using NServiceBus successfully for as long as I can remember. The license claimed to have expired and informed me that I needed a new license file, so I went to the website and generated a new one (for a dev machine). Every time I debug I get the same message requesting the license file. Is there any way to prevent this message from showing up EVERY time I try to debug? (Can I set a path programmatically, for example?)
Andreas: The ONLY mention of the license in the log file is as follows:
2013-03-05 14:24:23,983 [1] [INFO ] [NServiceBus.Licensing.LicenseManager] - No valid license found.
2013-03-05 14:24:23,986 [1] [DEBUG] [NServiceBus.Licensing.LicenseManager] - Trial for NServiceBus v3.3 has expired.
2013-03-05 14:24:23,988 [1] [WARN ] [NServiceBus.Licensing.LicenseManager] - Falling back to run in Basic1 license mode.
Here's a quick screen capture of the prompt after I select the new file, just so you know it SAYS the file is valid.
I believe this can also happen if your license is for a different version of the software than the one you are running. You may need to request a license that matches your NSB version.
Once you received your free license, did you import it?
You need to click the "Browse..." button and select the license to import it!

Problems using BitTornado for file distribution

I'm experimenting with BitTornado-0.3.17 to distribute a file to several machines (*nix). I ran into a couple of problems while doing so. Here is what I have done so far.
Downloaded BitTornado-0.3.17.tar.gz from http://download2.bittornado.com/download/BitTornado-0.3.17.tar.gz and untarred it.
Created a torrent file and started the tracker following the instructions in the README file.
Started a seeder
./btdownloadheadless.py ../BitTornado-0.3.17.tar.gz.torrent --saveas ../BitTornado-0.3.17.tar.gz
saving: BitTornado-0.3.17.tar.gz (0.2 MB)
percent done: 0.0
time left: Download Succeeded!
download to: /home/srikanth/BitTornado-0.3.17.tar.gz
download rate:
upload rate: 0.0 kB/s
share rating: 0.000 (0.0 MB up / 0.0 MB down)
seed status: 0 seen recently, plus 0.000 distributed copies
peer status: 0 seen now, 0.0% done at 0.0 kB/s
Now we have a seeder. I start a peer on another machine to download BitTornado-0.3.17.tar.gz.
./btdownloadheadless.py BitTornado-0.3.17.tar.gz.torrent
At this point my peer does not download any data from the seeder. However, if I kill the seeder and start it again, the peer immediately downloads from it. Why does it happen this way? The first time the seeder reports to the tracker, the tracker should be aware of the seeder and share that information with the newly joined peer. It only works when I start the seeder after the peer joins the network.
Has anyone used BitTornado to distribute files programmatically (not using GUI tools at all)?
Thanks :-)
EDIT: Here is what happened a few days later. I dug into the tracker logs and figured out that the seeder was binding itself to a private IP address interface and reporting it, which prevented the other clients from reaching the seeder, hence no download. So I passed the --ip option to it, which made it report to the tracker the machine's public IP address it was bound to. Even then, for some reason, I couldn't get the client to download from the seeder. However, I got it working by starting the client first and the seeder last. This has worked for me consistently. I can't think of any reason why it shouldn't work the other way around. So I'm starting clients first and then starting the seeder.
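The private-address behaviour described in the edit is easy to reproduce: clients of that era typically guessed their own address via a hostname lookup, which often yields a loopback or private address that remote peers cannot reach (hence the --ip override). A sketch of the guess and the check (the lookup is an assumption about BitTornado's internals, shown only to illustrate the failure mode):

```python
import ipaddress
import socket

def announce_ok(ip_str):
    """False for loopback/private addresses, which remote peers cannot reach."""
    addr = ipaddress.ip_address(ip_str)
    return not (addr.is_private or addr.is_loopback)

# How a client of that era might guess its own address (an assumption,
# for illustration); on many boxes this yields 127.x or a LAN address.
try:
    guessed = socket.gethostbyname(socket.gethostname())
except OSError:
    guessed = "127.0.0.1"  # hosts with no resolvable name

print(guessed, announce_ok(guessed))
print(announce_ok("8.8.8.8"))   # publicly routable address -> True
print(announce_ok("10.0.0.5"))  # RFC 1918 private address -> False
```

If the guessed address fails this check, the tracker will hand out an unreachable IP and you need the explicit --ip override mentioned above.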
All the symptoms indicate that only one of your machines is able to connect to the other (in this case, the "seeder" machine). Restarting the "seeder" means it announces itself to the tracker, gets the other peers' info, and then connects. If the downloader is unconnectable, it simply cannot do anything until the seeder sees its IP.
This may also be related to rerequest_interval in download_bt1.py or reannounce_interval in track.py. Setting them to smaller values may help you check whether the tracker receives and distributes the right information.
When I diffed BitTornado against Twitter's Murder code, I found a small difference, at line 75 of Downloader.py:
self.backlog = max(50, int(self.backlog * 0.075))
This change fixes a bug where downloads never complete.