MonetDB remote table "unexpected end of file" - sql

I'm trying to configure a remote table on a second node, but I receive "unexpected end of file" every time I run a SELECT statement.
Node one configuration:
listens on 0.0.0.0 (accessible from anywhere), dbfarm at /var/local/dbfarm
IP: XXX.XXX.XXX.128
PORT: 50000
MONETDB Version: MonetDB Database Server Toolkit v11.39.17 (Oct2020-SP5)
db: xrouterlab
table: aufXXX ("oid" BIGINT)
Node two configuration:
IP: XXX.XXX.XXX.126
PORT: 50000
MONETDB Version: MonetDB Database Server Toolkit v11.43.5 (Jan2022)
db: xrouterlab
CREATE REMOTE TABLE aufXXX (
    "oid" BIGINT
) ON 'mapi:monetdb://XXX.XXX.XXX.128:50000/xrouterlab';
operation successful
sql>select * from aufXXX limit 1;
unexpected end of file
Logs Node 1:
2022-03-16 11:33:32 MSG merovingian[3342]: proxying client XXX.XXX.XXX.126:47446 for database 'xrouterlab' to mapi:monetdb:///var/local/dbfarm/xrouterlab/.mapi.sock?database=xrouterlab
2022-03-16 11:33:32 MSG merovingian[3342]: target connection is on local UNIX domain socket, passing on filedescriptor instead of proxying
Logs Node 2:
2022-03-16 11:33:34 MSG merovingian[1703240]: database 'xrouterlab' (-1) was killed by signal SIGSEGV
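In case it is relevant, here is how I can confirm the version each server actually reports over SQL, since the two nodes run different MonetDB releases (Oct2020 vs Jan2022). This is a minimal sketch; the 127.0.0.1 host is a placeholder for each node:
# Minimal sketch: ask each server which version it reports; run once against
# each node. Host value is a placeholder.
mclient -h 127.0.0.1 -p 50000 -d xrouterlab \
        -s "SELECT value FROM sys.env() WHERE name = 'monet_version';"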
Thanks in advance for any help.
Diego

Related

Unable to write from spark pool to sql pool in Azure Synapse

I have a table in the default Spark pool that I need to load into the dedicated SQL pool in Azure Synapse. Below is the code I implemented; however, it is not loading.
%%pyspark
# Enable Arrow for faster pandas <-> Spark conversion, then persist the
# DataFrame as a table in the default Spark pool.
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
new_df = spark.createDataFrame(segmentation_output)
new_df.write.mode("overwrite").saveAsTable("default.segmentation_output")
%%pyspark
# Expose the DataFrame to other language cells in the same session.
new_df.createOrReplaceTempView("pysparkdftemptable")
%%spark
// Read the temp view from Scala and write it to the dedicated SQL pool.
val scala_df = spark.sqlContext.sql("select * from pysparkdftemptable")
scala_df.write.synapsesql("eana.bi.xim_CustomerSegment", Constants.INTERNAL)
Error : StructuredStream-spark package version: 2.4.5-1.3.1
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host eec8e890e9d5--0.tr624.northeurope1-a.worker.database.windows.net (redirected from emna-dv-ibsanalytics-wco-id-euno-sqs.database.windows.net), port 11030 has failed. Error: "connect timed out. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
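The exception itself suggests checking basic reachability first; a minimal sketch from the Spark environment (hostname and port copied from the error message):
# Minimal sketch: test whether the redirected SQL endpoint is reachable at
# all; a timeout here points at firewall/VNet rules rather than Spark code.
nc -zv -w 10 eec8e890e9d5--0.tr624.northeurope1-a.worker.database.windows.net 11030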

Error while running query on Impala with Superset

I'm trying to connect Impala to Superset. When I test the connection it prints "Seems OK!", and when I browse the databases on Impala with the SQL Editor on the left side, it shows all databases without problems.
(screenshot: preview of databases/tables)
But when I write a query and click on "Run Query", it gives the error: "Could not start SASL: b'Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Ticket expired)'"
(screenshot: error running query)
I'm running Superset with SSL in production mode (with Gunicorn) and Impala with SSL in a Kerberized Hadoop cluster, and my Impala database config is:
(screenshot: Impala connection config)
And in the extras I put:
{
    "metadata_params": {},
    "engine_params": {
        "connect_args": {
            "port": 21050,
            "use_ssl": "True",
            "ca_cert": "path/to/my/ca_cert.pem",
            "auth_mechanism": "GSSAPI"
        }
    },
    "metadata_cache_timeout": {},
    "schemas_allowed_for_csv_upload": []
}
How can I solve this error? My Superset log only shows:
Triggering query_id: 65
INFO:superset.views.core:Triggering query_id: 65
Query 65: Running query on a Celery worker
INFO:superset.views.core:Query 65: Running query on a Celery worker
Versions: Superset 0.36.0, Impyla 0.16.2
I was able to fix this error by doing these steps:
1 - Created a service user for the celery worker, created a Kerberos ticket for it, and added a crontab entry to renew the ticket (see the sketch below).
2 - Ran the celery worker from this service user instead of running it as root.
3 - Killed a celery worker that was running on another machine of my cluster.
4 - Restarted Impala and Superset.
I think this error occurred because some queries, instead of using the celery worker on my Superset machine, were using the celery worker on another machine that had no valid Kerberos ticket. I could pin this down because the celery worker log showed a failed connection to the worker on the other machine while a query was running.
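As an illustration of step 1, the renewal crontab might look like this (keytab path, principal, and schedule here are placeholders, not my actual values):
# Minimal sketch: renew the celery worker service user's Kerberos ticket
# every 8 hours; keytab path and principal are placeholders.
0 */8 * * * /usr/bin/kinit -kt /etc/security/keytabs/superset.keytab superset@EXAMPLE.COM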

Node not starting after creating a new node in rabbitmq

I want to create a cluster of 3 nodes. I have created two nodes with this command:
RABBITMQ_NODE_PORT=5680 RABBITMQ_NODENAME=rabbit1@localhost rabbitmq-server -detached
Now when I try to stop the node in order to join it to the cluster, it gives me an error stating that the node is not started at all.
What I have done till now is install RabbitMQ and start it using rabbitmq-server.
rabbit1@localhost.log
Error description:
init:do_boot/3
init:start_em/1
rabbit:start_it/1 line 480
rabbit:broker_start/0 line 356
rabbit:start_apps/2 line 575
app_utils:manage_applications/6 line 126
lists:foldl/3 line 1263
rabbit:'-handle_app_error/1-fun-0-'/3 line 696
throw:{could_not_start,rabbitmq_mqtt,
{rabbitmq_mqtt,
{{shutdown,
{failed_to_start_child,'rabbit_mqtt_listener_sup_:::1883',
{shutdown,
{failed_to_start_child,
{ranch_listener_sup,{acceptor,{0,0,0,0,0,0,0,0},1883}},
{shutdown,
{failed_to_start_child,ranch_acceptors_sup,
{listen_error,
{acceptor,{0,0,0,0,0,0,0,0},1883},
eaddrinuse}}}}}}},
{rabbit_mqtt,start,[normal,[]]}}}}
Log file(s) (may contain more information):
/usr/local/var/log/rabbitmq/rabbit1@localhost.log
/usr/local/var/log/rabbitmq/rabbit1@localhost_upgrade.log
Terminal:
Most common reasons for this are:
* Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
* CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
* Target node is not running
In addition to the diagnostics info below:
* See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
* Consult server logs on node rabbit1@localhost
* If target node is configured to use long node names, don't forget to use --longnames with CLI tools
DIAGNOSTICS
===========
attempted to contact: [rabbit1@localhost]
rabbit1@localhost:
* connected to epmd (port 4369) on localhost
* epmd reports: node 'rabbit1' not running at all
other nodes on localhost: [rabbit]
* suggestion: start the node
Current node details:
* node name: 'rabbitmqcli-9206-rabbit@localhost'
* effective user's home directory: /Users/yashparekh
* Erlang cookie hash: +/3SPQl4T2w3zA11j1+o4Q==
I expect the stop_app command to work so that I can join the node to the cluster.
Please let me know where I'm going wrong.
Thanks in advance.
Looking at this part of your log:
{failed_to_start_child,
{ranch_listener_sup,{acceptor,{0,0,0,0,0,0,0,0},1883}},
{shutdown,
{failed_to_start_child,ranch_acceptors_sup,
{listen_error,
{acceptor,{0,0,0,0,0,0,0,0},1883},
eaddrinuse}}}}}}},
it means that port 1883 (the MQTT port) is already in use; you also have to set this port dynamically for each extra node.
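A minimal sketch of starting the second node with both ports moved (port 1884 and the use of RABBITMQ_SERVER_START_ARGS here are my assumptions, not from the original answer):
# Minimal sketch: override the AMQP port and the MQTT listener port so the
# extra node does not collide with the default node. 1884 is an arbitrary
# free port chosen for illustration.
RABBITMQ_NODE_PORT=5680 \
RABBITMQ_SERVER_START_ARGS="-rabbitmq_mqtt tcp_listeners [1884]" \
RABBITMQ_NODENAME=rabbit1@localhost rabbitmq-server -detached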

Airflow Adaptive Server connection failed

I want to connect Airflow to Microsoft SQL Server. I configured my connection under the 'Connections' tab in the 'Admin' menu as described in the following link:
http://airflow.apache.org/howto/manage-connections.html
But when I run my DAG task that is related to SQL Server, it immediately fails with the following error:
[2019-03-28 16:16:07,439] {models.py:1788} ERROR - (18456, "Login failed for user 'XXXX'.DB-Lib error message 20018, severity 14:\nGeneral SQL Server error: Check messages from the SQL Server\nDB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (***.***.***.28:1433)\n")
My DAG code for the Microsoft SQL connection is the following:
sql_command = """
select * from [sys].[tables]
"""
t3 = MsSqlOperator( task_id = 'run_test_proc',
mssql_conn_id = 'FIConnection',
sql = sql_command,
dag = dag)
I verified the IP address, port number, and related configuration by establishing a connection through the pymssql library from my local computer. The test code is the following:
import pandas as pd
import pymssql

with pymssql.connect(server="***.***.***.28:1433",
                     user="XXXX",
                     password="XXXXXX") as conn:
    df = pd.read_sql("SELECT * FROM [sys].[tables]", conn)
    print(df)
Could you please share if you have experienced this issue?
By the way, I am running this on Ubuntu 16.04 LTS in VirtualBox.
I had the same problem because freetds-dev was missing on Linux:
apt-get install freetds-dev
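If pymssql was installed before FreeTDS was present, it may also need a rebuild against it; a minimal sketch (the source rebuild is my assumption, not part of the original answer):
# Minimal sketch: install the FreeTDS headers, then force pymssql to be
# rebuilt from source so it links against them.
apt-get install freetds-dev
pip install --force-reinstall --no-binary :all: pymssql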

IBM LDAP configuration issue

I am doing an IBM Tivoli Directory Server (LDAP) installation on a CentOS 7 server. IBM DB2 is used for its database configuration and is installed on the same server. I am facing an error in the step where the database is configured for a directory server instance.
[root@dev02 sbin]# ./idscfgdb -I idsusr -a dasusr1 -w dasusr1 -l /home/idsusr -t ldapdb -n
GLPWRP123I The program '/opt/ibm/ldap/V6.3.1/sbin/64/idscfgdb' is used with the following arguments '-I idsusr -a dasusr1 -w ***** -l /home/idsusr -t ldapdb -n'.
You have chosen to perform the following actions:
GLPCDB023I Database 'ldapdb' will be configured.
GLPCDB024I Database 'ldapdb' will be created at '/home/idsusr'
GLPCDB035I Adding database 'ldapdb' to directory server instance: 'idsusr'.
GLPCTL017I Cataloging database instance node: 'idsusr'.
GLPCTL018I Cataloged database instance node: 'idsusr'.
GLPCTL008I Starting database manager for database instance: 'idsusr'.
GLPCTL009I Started database manager for database instance: 'idsusr'.
GLPCTL026I Creating database: 'ldapdb'.
GLPCTL028E Failed to create database: 'ldapdb'. The failure might have occurred because the system was not set up correctly before using the tool.
GLPCTL011I Stopping database manager for the database instance: 'idsusr'.
GLPCTL012I Stopped database manager for the database instance: 'idsusr'.
GLPCDB004E Failed to add database 'ldapdb' to directory server instance: 'idsusr'.
GLPCDB026W The program did not complete successfully. View earlier error messages for information about the exact error.
While checking the DB2 logs, I found the following errors:
2018-08-31-02.26.04.833398-240 E252943E465 LEVEL: Severe
PID: 31078   TID: 139826858813184   PROC: db2sysc 0
INSTANCE: idsusr   NODE: 000   HOSTNAME: dev02
EDUID: 14   EDUNAME: db2wlmt 0
FUNCTION: DB2 UDB, oper system services, sqloRequestSetPriority, probe:60
MESSAGE: ZRC=0xFFFFFBEE=-1042 SQL1042C An unexpected system error occurred.
DATA #1: String, 29 bytes: Unable to set thread priority

2018-08-31-02.26.04.849838-240 I253409E533 LEVEL: Error (OS)
PID: 31023   TID: 139826611349248   PROC: db2wdog 0 [idsusr]
INSTANCE: idsusr   NODE: 000   HOSTNAME: dev02
EDUID: 2   EDUNAME: db2wdog 0 [idsusr]
FUNCTION: DB2 UDB, oper system services, sqloSetPriorityHdl, probe:5934
MESSAGE: ZRC=0x83000001=-2097151999
CALLED: OS, -, sched_setscheduler   OSERR: EPERM (1)
DATA #1: String, 51 bytes: Failure setting absolute priority of kernel thread.

2018-08-31-02.26.04.853094-240 E253943E466 LEVEL: Severe
PID: 31078   TID: 139826854618880   PROC: db2sysc 0
INSTANCE: idsusr   NODE: 000   HOSTNAME: dev02
EDUID: 15   EDUNAME: db2wlmtm 0
FUNCTION: DB2 UDB, oper system services, sqloRequestSetPriority, probe:60
MESSAGE: ZRC=0xFFFFFBEE=-1042 SQL1042C An unexpected system error occurred.
DATA #1: String, 29 bytes: Unable to set thread priority

2018-08-31-02.26.16.862999-240 E257903E347 LEVEL: Error (OS)
PID: 31130   TID: 140560770324352   PROC: db2star2
INSTANCE: idsusr   NODE: 000   HOSTNAME: dev02
FUNCTION: DB2 UDB, SQO Memory Management, sqloMemCreateSingleSegment, probe:100
CALLED: OS, -, shmget   OSERR: EEXIST (17)

2018-08-31-02.26.18.002541-240 E258251E726 LEVEL: Error (OS)
PID: 31131   TID: 140560770324352   PROC: db2star2
INSTANCE: idsusr   NODE: 000   HOSTNAME: dev02
FUNCTION: DB2 UDB, oper system services, sqloexecs, probe:2222
MESSAGE: ZRC=0x8300000D=-2097151987

2018-08-31-02.26.18.043809-240 I258978E433 LEVEL: Severe
PID: 31130   TID: 140560770324352   PROC: db2star2
INSTANCE: idsusr   NODE: 000   HOSTNAME: dev02
FUNCTION: DB2 UDB, base sys utilities, sqleAdjustSharedMemoryLimits, probe:20
MESSAGE: ZRC=0x840F0001=-2079391743=SQLO_ACCD "Access Denied"
DIA8701C Access denied for resource "", operating system return code was "".

2018-08-31-02.26.18.050443-240 E259412E347 LEVEL: Error (OS)
PID: 31130   TID: 140560770324352   PROC: db2star2
INSTANCE: idsusr   NODE: 000   HOSTNAME: dev02
FUNCTION: DB2 UDB, SQO Memory Management, sqloMemCreateSingleSegment, probe:100
CALLED: OS, -, shmget   OSERR: EEXIST (17)

2018-08-31-02.26.18.340053-240 I260908E491 LEVEL: Warning
PID: 31078   TID: 139826829453056   PROC: db2sysc 0
INSTANCE: idsusr   NODE: 000   APPHDL: 0-7   APPID: *LOCAL.idsusr.180831062618
HOSTNAME: dev02
EDUID: 21   EDUNAME: db2agent (instance) 0
FUNCTION: DB2 UDB, bsu security, sqlexLogPluginMessage, probe:20
DATA #1: String with size, 66 bytes: Password validation for user dasusr1 failed with rc = -2146498587
At present, I am not able to understand why this step is failing. Is it a kernel error or a password validation issue? I made the password of dasusr1 the same as dasusr1 so that it would be easy to remember and cause no issues. Can anybody guide me on this?
Your command shows that you are passing dasusr1 (the DB2 Administration Server user) to idscfgdb. The documentation indicates the database administrator ID should be used, and the database administrator is a different user from the DB2 Administration Server user. If you do not have a separate user ID for the database administrator, you might be able to use the instance owner ID instead.
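For example, a minimal sketch of the re-run with the instance owner as the administrator ID (the password is a placeholder):
# Minimal sketch: pass the instance owner (idsusr) as the administrator ID (-a)
# instead of the DAS user dasusr1; replace <password> with the real one.
./idscfgdb -I idsusr -a idsusr -w <password> -l /home/idsusr -t ldapdb -n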