Aerospike docker - (100L, 'UDF: Execution Error 1')

I deployed an Aerospike container using the official Docker Hub image. When I try to execute test_list = client.llist(key, 'test_list'), my Python client script returns the following error:
exception.UDFError: (100L, 'UDF: Execution Error 1', 'src/main/llist/llist_operations.c', 93)
I looked at the Aerospike logs and found that each time this code is executed, the error below gets printed:
: WARNING (udf): (src/main/mod_lua.c:599) Lua Create Error: module 'llist' not found:
no field package.preload['llist']
no file './llist.lua'
no file '/usr/local/share/luajit-2.0.3/llist.lua'
no file '/usr/local/share/lua/5.1/llist.lua'
no file '/usr/local/share/lua/5.1/llist/init.lua'
no file '/opt/aerospike/sys/udf/lua/llist.lua'
no file '/opt/aerospike/sys/udf/lua/external/llist.lua'
no file '/opt/aerospike/usr/udf/lua/llist.lua'
no file './llist.so'
no file '/usr/local/lib/lua/5.1/llist.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/opt/aerospike/sys/udf/lua/llist.so'
no file '/opt/aerospike/sys/udf/lua/external/llist.so'
no file '/opt/aerospike/usr/udf/lua/llist.so'
: INFO (udf): (udf.c:954) lua error, ret:1
I could not find the relevant Lua files or a Lua installation in the container. The same code works fine when I run it directly on the host. Is there some extra configuration that needs to be done to the container?

LDTs (Large Data Types, which llist is part of) were dropped in server 3.15.
https://www.aerospike.com/docs/guide/ldt_guide.html
Excerpt:
Aerospike has removed the Large Data Type feature as of server version 3.15 after deprecating this functionality 12 months earlier. Please see the removal notice and deprecation notice. The features listed below are no longer in Aerospike servers.
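Since llist was part of the removed LDT feature, the data has to live in an ordinary list bin instead. Below is a minimal sketch using the aerospike Python client, assuming a single-node container on the default port and a hypothetical test/demo key; adjust the namespace, set, and bin names to your own.
import aerospike

# Hypothetical connection settings for a local container on the default port.
config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

key = ('test', 'demo', 'user1')

# Store the values in a plain list bin instead of an LDT llist.
client.put(key, {'test_list': [1, 2, 3]})

# Read the record back; the bin comes back as a normal Python list.
_, _, record = client.get(key)
print(record['test_list'])  # [1, 2, 3]

client.close()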

Related

Cannot load ldap3 python module in ExecuteScript processor

I am trying to run a Python script in a NiFi ExecuteScript processor. This script uses the ldap3 library from https://pypi.org/project/ldap3/. I am aware that the processor runs Jython and that I am unable to use compiled code, .so files, etc., but I noted that the library claims to be:
A strictly RFC 4510 conforming LDAP V3 pure Python client library
I have defined the path to the folder containing the library in the processor's PROPERTIES tab, using
Script Engine : python
Script File : /mnt/path_to_my_scripts/run.py
Script Body :
Module Directory : /mnt/path_to_my_libs
...where the ldap3 library folder is:
/mnt/path_to_my_libs/ldap3
When I start the processor I get the following error message:
16:17:32 GMT - server.my.domain:9091 - ERROR ExecuteScript[id=xxxx]
Failed to process session due to
org.apache.nifi.processor.exception.ProcessException:
javax.lang.NoClassDefFoundError:
org/scijava/jython/shaded/javax/xml/bind/DatatypeConverter in
at line number 5: javax.script.ScriptException:
javax.lang.NoClassDefFoundError: java.lang.NoClassDefFoundError:
org/scijava/jython/shaded/javax/xml/bind/DatatypeConverter in
at line number 5
Sure enough, line number 5 in the script is:
import ldap3
I have other scripts running successfully which do not use ldap3.
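Before chasing the ldap3 packaging itself, a quick way to rule out a path problem (as opposed to the shaded-Jython javax.xml.bind issue in the stack trace) is a small diagnostic script in the same processor. This is only a sketch using the standard library, with the directory name taken from the configuration above:
import sys

# Diagnostic sketch: print the Jython module search path so you can confirm
# that the Module Directory (/mnt/path_to_my_libs) was actually added to it.
print('Jython module search path:')
for p in sys.path:
    print('  ' + p)

# If /mnt/path_to_my_libs shows up here, the failing "import ldap3" is not a
# path problem but the missing javax.xml.bind.DatatypeConverter class that the
# shaded Jython runtime reports.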

Setting up S3 logging in Airflow

This is driving me nuts.
I'm setting up Airflow in a cloud environment. I have one server running the scheduler and the webserver, one server as a Celery worker, and I'm using Airflow 1.8.0.
Running jobs works fine. What refuses to work is logging.
I've set up the correct path in airflow.cfg on both servers:
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn
I've set up s3_logging_conn in the airflow UI, with the access key and the secret key as described here.
I checked the connection using
s3 = airflow.hooks.S3Hook('s3_logging_conn')
s3.load_string('test','test',bucket_name='my-bucket')
This works on both servers, so the connection is properly set up. Yet all I get whenever I run a task is:
*** Log file isn't local.
*** Fetching here: http://*******
*** Failed to fetch log file from worker.
*** Reading remote logs...
Could not read logs from s3://my-bucket/airflow_logs/my-dag/my-task/2018-02-15T21:46:47.577537
I tried manually uploading a log following the expected conventions and the webserver still can't pick it up, so the problem is on both ends. I'm at a loss: everything I've read so far tells me this should be working. I'm close to just installing 1.9.0, which I hear changes logging, to see if I have more luck.
UPDATE: I made a clean install of Airflow 1.9 and followed the specific instructions here.
The webserver won't even start now, failing with the following error:
airflow.exceptions.AirflowConfigException: section/key [core/remote_logging] not found in config
There is an explicit reference to this section in this config template.
So I tried removing that reference and just loading the S3 handler without checking first, and I got the following error message instead:
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/lib64/python3.6/logging/config.py", line 384, in resolve:
self.importer(used)
ModuleNotFoundError: No module named
'airflow.utils.log.logging_mixin.RedirectStdHandler';
'airflow.utils.log.logging_mixin' is not a package
I get the feeling that this shouldn't be this hard.
Any help would be much appreciated, cheers
Solved:
upgraded to 1.9
ran the steps described in this comment
added
[core]
remote_logging = True
to airflow.cfg
ran
pip install --upgrade airflow[log]
Everything's working fine now.
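For reference, here are the settings from the question and the fix combined, as a sketch of the relevant airflow.cfg section (the bucket and connection names are the ones used throughout this post):
[core]
# Enable remote logging (the key the 1.9 webserver complained about).
remote_logging = True
# S3 prefix for task logs and the Airflow connection holding the credentials.
remote_base_log_folder = s3://my-bucket/airflow_logs/
remote_log_conn_id = s3_logging_conn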

Starting an Ignite cluster from the command line

I am trying to start an Ignite cluster from the command line on Windows. This is what I did:
Downloaded the Ignite binary distribution and kept it on the C drive.
Set the environment variable IGNITE_HOME to that folder location.
In the command line I opened the directory:
C:\apache-ignite-fabric-2.2.0-bin\bin
Then, from that directory:
C:\apache-ignite-fabric-2.2.0-bin\bin>sh ignite.sh examples/config/example-ignite.xml
I am getting the following error:
Failed to create Ignite component (consider adding ignite-spring module to classpath) [component=SPRING, cls=org.apache.ignite.internal.processors.spring.IgniteSpringProcessorImpl]
What can be the reason for this error?
Found the solution: it needs to be run with the .bat file, not the .sh file:
C:\apache-ignite-fabric-2.2.0-bin\bin>ignite.bat examples/config/example-ignite.xml
If you're on Windows, I imagine you should try ignite.bat. ignite.sh might have problems with the classpath when run on Windows; that would explain it.
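Putting the steps together, a minimal cmd.exe session looks roughly like this (assuming the distribution was extracted to C:\apache-ignite-fabric-2.2.0-bin, as the paths in the question suggest):
set IGNITE_HOME=C:\apache-ignite-fabric-2.2.0-bin
cd C:\apache-ignite-fabric-2.2.0-bin\bin
ignite.bat examples/config/example-ignite.xml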

My Debian repository is throwing a "Hash Sum mismatch" error

We maintain a Debian repository for an app, and all .deb files are stored in an S3 bucket.
We wrote a script to upload the files and update the Packages.gz file. All went fine until one of the developers found deb-s3 and tried using it.
After the first package upload we started getting this error message:
W: Failed to fetch s3://s3.amazonaws.com/myapp/dists/test/main/binary-amd64/Packages Hash Sum mismatch
I've tried restoring an old version of our Packages.gz file with no success. I've searched for this error, and removing /var/lib/apt/lists/ does not work either.
What would deb-s3 do that could break our entire repo?
It looks like deb-s3 creates a Release file under dists/test, and that conflicts with our Packages.gz.
Removing the Release file restored our repository to what it was.
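For context: apt verifies Packages and Packages.gz against the checksums recorded in the Release file, so a Release file written by deb-s3 that still points at an older Packages.gz produces exactly this kind of mismatch. A rough sketch of such a Release file (placeholder values only, not the actual repo contents):
Origin: myapp
Suite: test
Components: main
Architectures: amd64
SHA256:
 <sha256 of Packages>    <size> main/binary-amd64/Packages
 <sha256 of Packages.gz> <size> main/binary-amd64/Packages.gz
If the recorded checksum no longer matches the Packages.gz that the old upload script regenerates, apt reports the "Hash Sum mismatch" above.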

WebDeploy runcommand issue

I'm trying to deploy a package with WebDeploy V3.
The installation process syncs a source folder to a destination folder on the remote computer and then runs a PowerShell script once the sync is done.
The command being executed is:
'"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:dirPath='C:\source' -dest:dirPath='D:\destination',computerName=XXX -postSync:runcommand='powershell -inputformat none D:\destination\Install.ps1',successReturnCodes=0'
This yields the following error:
Info: Using ID '49edd786-d8a0-4acf-be7b-95dd6e1391cc' for connections to the remote server. Performing '-postSync'... Info:
Using ID '5ef9d005-82fa-4811-9f51-1741c8d622de' for connections to the remote server.
Info: Adding MSDeploy.runCommand (MSDeploy.runCommand).
Error: (11/28/2012 4:34:24 AM) An error occurred when the request was processed on the remote computer. Error: The entry type 'Unknown' was not expected at this time. The serialization stream may be corrupted.
Error count: 1.
Error during '-postSync'. Total changes: 0 (0 added, 0 deleted, 0 updated, 0 parameters changed, 0 bytes copied)
Searching the net for this error, I didn't see anybody who encountered it when using the runCommand provider. If anybody has encountered a similar issue and has ideas or suggestions, I would be most thankful.
From what I've seen, using runCommand to execute an arbitrary command line might be a bit buggy. Try moving the command line into a .bat or .cmd file and providing a (full?) path to that. The file will be uploaded and executed, as long as you don't try to pass in any arguments to it.
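For example, wrapping the PowerShell call from the question in a hypothetical RunInstall.bat on the destination and pointing runCommand at it would look roughly like this (untested sketch):
REM Contents of D:\destination\RunInstall.bat
powershell -inputformat none D:\destination\Install.ps1
"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:dirPath='C:\source' -dest:dirPath='D:\destination',computerName=XXX -postSync:runCommand='D:\destination\RunInstall.bat',successReturnCodes=0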
For future viewers of this post: I encountered this same specific error (Error: The entry type 'Unknown' was not expected at this time. The serialization stream may be corrupted) after adding runCommand provider usage to my MyProject.wpp.targets file for the Web Publishing Pipeline MSBuild process. The runCommand value was direct cmd shell input used to clear read-only flags with attrib -R.
In my case, my build server was configured with WebDeploy 3.0, while the server targeted by the deployment package was configured with WebDeploy 2.0. After upgrading the target server to WebDeploy 3.0, this particular problem was resolved.
However, due to other errors surrounding runCommand (providing the correct path to the destination executable at package runtime), my solution still doesn't work entirely, so take this all with a grain of salt.