syslog-ng mysql destination queues but never writes - syslog-ng

I have a mysql destination that is queueing the logs but never writing them. I have removed explicit-commits, which from my understanding is the part that tells it to enable queueing, but it still never seems to write anything:
destination d_sql {
    sql(
        type(mysql)
        host("<MyHost>") username("<MyUser>") password("<MyPassword>")
        database("<Database>")
        table("<Table>")
        columns("username", "user_agent", "message", "host", "appointment_code", "os", "browser", "error_text", "user_id", "ip")
        values("${json.username}", "${json.user_agent}", "$(format-json --scope dot-nv-pairs)", "${json.host}", "${appointment_code}", "${ua_os}", "${ua_browser}", "${error_text}", "${json.user_id}", "${json.ip}")
        flags(dont-create-tables)
    );
};
This destination works on our older server (Ubuntu 18.04 with syslog-ng 3.13) but not on the newer server (Ubuntu 22.04 with syslog-ng 3.35). I am not seeing any errors to even start digging into, so any pointers would be greatly appreciated.
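One thing worth trying, as a sketch only (I have not verified these option names against 3.35, so check the documentation for your release): put explicit-commits back and pair it with an explicit batch size, because with explicit-commits syslog-ng commits only after a full batch. Also run syslog-ng-ctl stats to see whether the destination's queued counter keeps growing or whether messages are being dropped.
destination d_sql {
    sql(
        type(mysql)
        host("<MyHost>") username("<MyUser>") password("<MyPassword>")
        database("<Database>")
        table("<Table>")
        # columns(...) and values(...) unchanged from the config above
        flags(dont-create-tables, explicit-commits)
        batch-lines(100)     # commit after every 100 rows; flush-lines() in older releases
        batch-timeout(5000)  # ...or after 5 seconds, whichever comes first
    );
};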

Related

AWS Glue - getSink() is throwing "No such file or directory" right after glue_context.purge_s3_path

I am trying to purge a partition of a Glue catalog table and then recreate the partition using the getSink option (similar to a truncate/load of a partition in a database).
For purging the partition, I am using the glueContext.purge_s3_path option with retention period = 0. The partition is getting purged successfully.
self._s3_path = "s3://server1/main/transform/Account/local_segment/source_system=SAP/"
self._glue_context.purge_s3_path(
    self._s3_path,
    {"retentionPeriod": 0, "excludeStorageClasses": ()}
)
Here, catalog database = Account, table = local_segment, partition key = source_system.
However, when I try to recreate the partition right after the purge step, I get "An error occurred while calling o180.pyWriteDynamicFrame. No such file or directory" from getSink writeFrame.
If I remove the purge part, then getSink works fine and is able to create the partition and write the files.
I even tried "MSCK REPAIR TABLE" between the purge and getSink steps, but no luck.
Shouldn't getSink create a partition if it does not exist, i.e. was purged in the previous step?
target = self._glue_context.getSink(
    connection_type="s3",
    path=self._s3_path_prefix,
    enableUpdateCatalog=True,
    updateBehavior="UPDATE_IN_DATABASE",
    partitionKeys=["source_system"]
)
target.setFormat("glueparquet")
target.setCatalogInfo(
    catalogDatabase=f"{self._target_database}",
    catalogTableName=f"{self._target_table_name}"
)
target.writeFrame(self._dyn_frame)
Where:
self._s3_path_prefix = "s3://server1/main/transform/Account/local_segment/"
self._target_database = "Account"
self._target_table_name = "local_segment"
Error message:
An error occurred while calling o180.pyWriteDynamicFrame. No such file or directory 's3://server1/main/transform/Account/local_segment/source_system=SAP/run-1620405230597-part-block-0-0-r-00000-snappy.parquet'
Try checking whether you have permission for this object on S3. I got the same error, and once I configured the object to be public (just as a test), it worked. So maybe it's a new object and your process might not have access to it.
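Building on that answer, here is a minimal boto3 sketch to separate a permissions problem from a genuinely missing object; the bucket and key are taken from the paths in the question, so adjust them to your account:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "server1"  # from the question's s3://server1/... path
key = ("main/transform/Account/local_segment/source_system=SAP/"
       "run-1620405230597-part-block-0-0-r-00000-snappy.parquet")

# head_object distinguishes the two failure modes: error code 403 means the
# caller has no access, 404 means the key simply is not there any more.
try:
    s3.head_object(Bucket=bucket, Key=key)
    print("object exists and is readable")
except ClientError as e:
    print("failed:", e.response["Error"]["Code"], e.response["Error"]["Message"])

# It is also worth listing what actually remains under the partition prefix
# after the purge, in case the catalog still references deleted files.
resp = s3.list_objects_v2(Bucket=bucket,
                          Prefix="main/transform/Account/local_segment/source_system=SAP/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])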

System i SQL stream file from IFS

IBM System i V7R3M0. I would like to pull a .json file with SQL, but the SQL cannot resolve QSYS2.IFS_READ.
select x.id, x.username, x.firstName, x.lastName, x.email
from json_table(
QSYS2.IFS_READ(PATH_NAME => '/FileTransferData/userList.json'),
'lax $' columns (
id INTEGER PATH 'lax $.id',
username VARCHAR(30) FORMAT JSON PATH 'lax $.username',
firstName VARCHAR(30) FORMAT JSON PATH 'lax $.firstName',
lastName VARCHAR(30) FORMAT JSON PATH 'lax $.lastName',
email VARCHAR(75) FORMAT JSON PATH 'lax $.email'
)
)
as x;
The error is:
SQL State: 42704
Vendor Code: -204
Message: [SQL0204] IFS_READ in QSYS2 type *N not found. Cause . . . . . : IFS_READ in QSYS2 type *N was not found.
The job log in QZDASOINIT tells me the same thing, that I do not have IFS_READ in QSYS2:
Job 111566/QUSER/QZDASOINIT started on 04/12/21 at 12:23:06 in subsystem QUSRWRK in QSYS. Job entered system on 04/12/21 at 12:23:06.
User MOLNARJ from client 10.111.0.24 connected to server.
The following special registers have been set: CLIENT_ACCTNG: , CLIENT_APPLNAME: System i Navigator - Run SQL Scripts, CLIENT_PROGRAMID: cwbunnav.exe, CLIENT_USERID: MOLNARJ, CLIENT_WRKSTNNAME: FS2ISYMOB943.umassmemorial.org
Trigger Q__OSYS_QAQQINI_BEFORE_INSERT_______ in library QTEMP was added to file QAQQINI in library QTEMP.
Trigger Q__OSYS_QAQQINI_AFTER_INSERT________ in library QTEMP was added to file QAQQINI in library QTEMP.
Trigger Q__OSYS_QAQQINI_BEFORE_UPDATE_______ in library QTEMP was added to file QAQQINI in library QTEMP.
Trigger Q__OSYS_QAQQINI_AFTER_UPDATE________ in library QTEMP was added to file QAQQINI in library QTEMP.
Trigger Q__OSYS_QAQQINI_BEFORE_DELETE_______ in library QTEMP was added to file QAQQINI in library QTEMP.
Trigger Q__OSYS_QAQQINI_AFTER_DELETE________ in library QTEMP was added to file QAQQINI in library QTEMP.
Object QAQQINI in QTEMP type *FILE created.
1 objects duplicated.
IFS_READ in QSYS2 type *N not found.
I'm between a rock and a hard place. My outsourced support company claims that
I need to create a program to utilize the API. However, I believe the API is not installed.
I have based my work on IBM technical documents such as this:
https://www.ibm.com/docs/en/i/7.4?topic=is-ifs-read-ifs-read-binary-ifs-read-utf8-table-functions
Running the example in this document (with the file path and name changed to mine) gives the same error.
SELECT * FROM TABLE(QSYS2.IFS_READ(PATH_NAME => '/FileTransferData/userList.json', END_OF_LINE => 'CRLF'));
This link indicates PTF group SF99703 level 22 is required. You can check what's installed or available with:
with iLevel(iVersion, iRelease) as (
select
OS_VERSION, OS_RELEASE
from
sysibmadm.env_sys_info )
select
case PTF_GROUP_CURRENCY when 'INSTALLED LEVEL IS CURRENT' then '' else PTF_GROUP_CURRENCY end,
PTF_GROUP_ID "ID",
PTF_GROUP_TITLE "Title",
PTF_GROUP_LEVEL_INSTALLED "Installed",
PTF_GROUP_LEVEL_AVAILABLE "Available",
ptf_group_level_available - ptf_group_level_installed "Diff",
date(to_date(PTF_GROUP_LAST_UPDATED_BY_IBM, 'MM/DD/YYYY')) "Available since",
current date - date(to_date(PTF_GROUP_LAST_UPDATED_BY_IBM, 'MM/DD/YYYY')) "Days since available",
PTF_GROUP_RELEASE "Release",
PTF_GROUP_STATUS_ON_SYSTEM "Status"
from
iLevel,
systools.group_ptf_currency P
where
ptf_group_id = 'SF99703'
order by
ptf_group_level_available - ptf_group_level_installed desc;
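In addition to the PTF group check above, you can ask the catalog directly whether the table function exists at all (this assumes nothing more than the standard QSYS2 catalog views, which are present on V7R3); on a system missing the required Db2 PTF level, this should return no rows:
SELECT ROUTINE_SCHEMA, ROUTINE_NAME, ROUTINE_TYPE
FROM QSYS2.SYSROUTINES
WHERE ROUTINE_SCHEMA = 'QSYS2'
  AND ROUTINE_NAME IN ('IFS_READ', 'IFS_READ_BINARY', 'IFS_READ_UTF8');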

Access violation at address 04FD6CC2 in module 'dbxora.dll'. Read of address 00000004

I have created a Delphi database application which uses the DBX TSQLConnection component to connect to an Oracle database (version 19c).
I'm getting an access violation error when I call the Oracle LISTAGG function (on the SQLQuery1.Open line).
When I debugged, I found the error in the object file below:
FileName : Data.DBXDynalink.pas
Function : function TDBXDynalinkReader.DerivedNext: Boolean;
Error Line : DBXResult := FMethodTable.FDBXReader_Next(FReaderHandle);
Actual Error : Access violation at address 04FD6CC2 in module 'dbxora.dll'. Read of address 00000004
Below is my code:
// ...SQLQuery1 initialization...
SQLQuery1.CommandText := Trim(memoSQLText.Lines.Text); // Assigning the query
SQLQuery1.Open; // Exactly on this line I'm getting the error
if SQLQuery1.RecordCount > 0 then
  // ...do something here...
Note: the same query executes fine against Oracle versions below 19c (19.3).
IDE used for application development: Delphi XE3 (I have also checked with Delphi 10.1 Berlin).
DB version: Oracle 19c (19.3)
Steps to reproduce:
1. Execute the queries below to create test data:
create table myuserlist(myuser varchar2(10));
insert into myuserlist(myuser) values('karthik');
insert into myuserlist(myuser) values('aarush');
insert into myuserlist(myuser) values('yuvan');
2. Try to open the query below using TSQLConnection and TSQLQuery:
select listagg(a.myuser, ', ') within group (order by a.myuser) as myusernames from myuserlist a
A sample project is available on GitHub:
https://github.com/yuvankarthik/DELPHI-DemoOracleConnect.git
Help me resolve this issue.
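Not a root-cause fix, but a workaround worth trying (an educated guess on my part, not verified against dbxora.dll): wrap the LISTAGG in an explicit CAST so the DBX driver receives a plainly described VARCHAR2 column rather than whatever descriptor the 19c client reports for the raw aggregate:
select cast(listagg(a.myuser, ', ') within group (order by a.myuser) as varchar2(4000)) as myusernames
from myuserlist a;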

How to connect to HIVE using python?

I'm using a CDH cluster which is Kerberos-enabled, and I'd like to use pyhive to connect to Hive and read Hive tables. Here is the code I have:
from pyhive import hive
from TCLIService.ttypes import TOperationState

cursor = hive.connect(
    host='xyz',
    port=10000,
    username='my_username',
    auth='KERBEROS',
    database='poc',
    kerberos_service_name='hive',
).cursor()
I'm getting the value of xyz from hive-site.xml under hive.metastore.uris; however, that entry says xyz:9083, and if I replace 10000 with 9083, it complains.
My problem is that when I connect (using port = 10000), I get a permission error when executing a query, while I can read that same table from the Hive CLI or beeline. My questions: 1) is xyz the value I should use? 2) which port should I use? 3) if all of the above is correct, why am I still getting a permission error?
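To untangle the two endpoints (the host name below is the placeholder from the question): hive.metastore.uris (port 9083) is the metastore's Thrift service, which speaks a different protocol, so pyhive must point at the HiveServer2 host and hive.server2.thrift.port (10000 by default). If the connection on 10000 succeeds but queries fail with a permission error, that is usually the authorization layer (Sentry/Ranger on CDH) rejecting your Kerberos principal rather than a connectivity problem, so the grants for your principal are the next place to look. A minimal sketch:
from pyhive import hive

# Connect to HiveServer2 (port 10000), not the metastore (port 9083).
conn = hive.connect(
    host='xyz',                    # HiveServer2 host; placeholder from the question
    port=10000,                    # hive.server2.thrift.port
    auth='KERBEROS',
    kerberos_service_name='hive',  # first component of HiveServer2's principal
    database='poc',
)
cursor = conn.cursor()
cursor.execute('SHOW TABLES')      # cheap probe before touching real tables
print(cursor.fetchall())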

How do I verify SQL Server versions, including version, service pack, cumulative update and patch

I have 250 VMs for different clients, using different versions of SQL Server.
I ran the command below to get the details, but it was not successful:
SELECT
    SERVERPROPERTY('productversion'),
    SERVERPROPERTY('productlevel'),
    SERVERPROPERTY('edition'),
    @@VERSION,
    SERVERPROPERTY('PatchLevel')
Can someone please help? I need the information for details like SQL Server version, service pack, cumulative update and patches installed to the server.
Hope this is what you need:
select @@VERSION
Hope this helps you:
SELECT SERVERPROPERTY('MachineName') AS [MachineName],
SERVERPROPERTY('ServerName') AS [ServerName],
SERVERPROPERTY('InstanceName') AS [Instance],
SERVERPROPERTY('IsClustered') AS [IsClustered],
SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS [ComputerNamePhysicalNetBIOS],
SERVERPROPERTY('Edition') AS [Edition],
SERVERPROPERTY('ProductLevel') AS [ProductLevel], -- What servicing branch (RTM/SP/CU)
SERVERPROPERTY('ProductUpdateLevel') AS [ProductUpdateLevel], -- Within a servicing branch, what CU# is applied
SERVERPROPERTY('ProductVersion') AS [ProductVersion],
SERVERPROPERTY('ProductMajorVersion') AS [ProductMajorVersion],
SERVERPROPERTY('ProductMinorVersion') AS [ProductMinorVersion],
SERVERPROPERTY('ProductBuild') AS [ProductBuild],
SERVERPROPERTY('ProductBuildType') AS [ProductBuildType], -- Is this a GDR or OD hotfix (NULL if on a CU build)
SERVERPROPERTY('ProductUpdateReference') AS [ProductUpdateReference], -- KB article number that is applicable for this build
SERVERPROPERTY('ProcessID') AS [ProcessID],
SERVERPROPERTY('Collation') AS [Collation],
SERVERPROPERTY('IsFullTextInstalled') AS [IsFullTextInstalled],
SERVERPROPERTY('IsIntegratedSecurityOnly') AS [IsIntegratedSecurityOnly],
SERVERPROPERTY('FilestreamConfiguredLevel') AS [FilestreamConfiguredLevel],
SERVERPROPERTY('IsHadrEnabled') AS [IsHadrEnabled],
SERVERPROPERTY('HadrManagerStatus') AS [HadrManagerStatus],
SERVERPROPERTY('InstanceDefaultDataPath') AS [InstanceDefaultDataPath],
SERVERPROPERTY('InstanceDefaultLogPath') AS [InstanceDefaultLogPath],
SERVERPROPERTY('BuildClrVersion') AS [Build CLR Version];
Here is a query that gives you all the server properties available through the SERVERPROPERTY system function.
You can run this on old SQL Server versions, and if a property didn't exist back then, the function will just safely return NULL.
SELECT
SERVERPROPERTY('BuildClrVersion') AS BuildClrVersion
,SERVERPROPERTY('Collation') AS [Collation]
,SERVERPROPERTY('CollationID') AS CollationID
,SERVERPROPERTY('ComparisonStyle') AS ComparisonStyle
,SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS [ComputerNamePhysicalNetBIOS]
,SERVERPROPERTY('Edition') AS [Edition]
,SERVERPROPERTY('EditionID') AS EditionID
,SERVERPROPERTY('EngineEdition') AS EngineEdition
,SERVERPROPERTY('FilestreamConfiguredLevel') AS FilestreamConfiguredLevel
,SERVERPROPERTY('FilestreamEffectiveLevel') AS FilestreamEffectiveLevel
,SERVERPROPERTY('FilestreamShareName') AS FilestreamShareName
,SERVERPROPERTY('HadrManagerStatus') AS HadrManagerStatus
,SERVERPROPERTY('InstanceDefaultDataPath') AS InstanceDefaultDataPath
,SERVERPROPERTY('InstanceDefaultLogPath') AS InstanceDefaultLogPath
,SERVERPROPERTY('InstanceName') AS [Instance]
,SERVERPROPERTY('IsAdvancedAnalyticsInstalled') AS IsAdvancedAnalyticsInstalled
,SERVERPROPERTY('IsClustered') AS [IsClustered]
,SERVERPROPERTY('IsFullTextInstalled') AS [IsFullTextInstalled]
,SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled
,SERVERPROPERTY('IsIntegratedSecurityOnly') AS [IsIntegratedSecurityOnly]
,SERVERPROPERTY('IsLocalDB') AS IsLocalDB
,SERVERPROPERTY('IsPolybaseInstalled') AS IsPolybaseInstalled
,SERVERPROPERTY('IsSingleUser') AS IsSingleUser
,SERVERPROPERTY('IsXTPSupported') AS IsXTPSupported
,SERVERPROPERTY('LCID') AS LCID
,SERVERPROPERTY('LicenseType') AS LicenseType
,SERVERPROPERTY('MachineName') AS [MachineName]
,SERVERPROPERTY('ProcessID') AS [ProcessID]
,SERVERPROPERTY('ProductBuild') AS ProductBuild
,SERVERPROPERTY('ProductBuildType') AS ProductBuildType
,SERVERPROPERTY('ProductLevel') AS ProductLevel
,SERVERPROPERTY('ProductMajorVersion') AS ProductMajorVersion
,SERVERPROPERTY('ProductMinorVersion') AS ProductMinorVersion
,SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel
,SERVERPROPERTY('ProductUpdateReference') AS ProductUpdateReference
,SERVERPROPERTY('ProductVersion') AS [ProductVersion]
,SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceLastUpdateDateTime
,SERVERPROPERTY('ResourceVersion') AS ResourceVersion
,SERVERPROPERTY('ServerName') AS [ServerName]
,SERVERPROPERTY('SqlCharSet') AS SqlCharSet
,SERVERPROPERTY('SqlCharSetName') AS SqlCharSetName
,SERVERPROPERTY('SqlSortOrder') AS SqlSortOrder
,SERVERPROPERTY('SqlSortOrderName') AS SqlSortOrderName
,@@VERSION AS [Version];
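Since this has to run against 250 VMs, here is a sketch of collecting the key properties centrally with Python and pyodbc; the server list, driver name, and trusted (Windows) authentication are assumptions about your environment:
import pyodbc

# Placeholder inventory; in practice, load this from a file or your CMDB.
servers = ["server1", "server2\\INSTANCE1"]

# SERVERPROPERTY returns sql_variant, which ODBC does not handle natively,
# so cast each property to nvarchar before fetching it.
QUERY = """
SELECT CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(128))     AS ProductVersion,
       CAST(SERVERPROPERTY('ProductLevel') AS nvarchar(128))       AS ProductLevel,
       CAST(SERVERPROPERTY('ProductUpdateLevel') AS nvarchar(128)) AS ProductUpdateLevel,
       CAST(SERVERPROPERTY('Edition') AS nvarchar(128))            AS Edition
"""

for server in servers:
    conn_str = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=" + server + ";Trusted_Connection=yes;")
    try:
        conn = pyodbc.connect(conn_str, timeout=5)
        row = conn.cursor().execute(QUERY).fetchone()
        print(server, row.ProductVersion, row.ProductLevel,
              row.ProductUpdateLevel, row.Edition)
        conn.close()
    except pyodbc.Error as exc:
        print(server, "unreachable:", exc)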