Connection::SQLGetInfoW: [Simba][ODBC] (11180) SQLGetInfo property not found: 1750 - google-bigquery

This is a setup where Microsoft's Power BI is the frontend presenting data to end users. Behind it, an on-premises PBI gateway connects to BigQuery via the Magnitude Simba ODBC driver for BigQuery. It had always worked flawlessly, but two days ago the PBI data refresh started failing due to timeouts.
The BigQuery ODBC driver's debug log shows the two errors below, in hundreds of rows per refresh:
SimbaODBCDriverforGoogleBigQuery_connection_9.log:Aug 29 15:21:54.154 ERROR 544 Connection::SQLGetInfoW: [Simba][ODBC] (11180) SQLGetInfo property not found: 180
SimbaODBCDriverforGoogleBigQuery_connection_9.log:Aug 29 15:22:49.427 ERROR 8176 Connection::SQLGetInfoW: [Simba][ODBC] (11180) SQLGetInfo property not found: 1750
And only one occurrence per refresh of this:
SimbaODBCDriverforGoogleBigQuery_connection_6.log:Aug 29 16:56:15.102 ERROR 6704 BigQueryAPIClient::GetResponseCheckErrors: HTTP error: Error encountered during execution. Retrying may solve the problem.
After some intensive web searching, it looks like this might be related to incorrect coding, either wrong data types or strings that are too long, but I found nothing conclusive.
Other, smaller refreshes to the same destination work without issues.
Do we have any knowledgebase or reference for such cryptic error messages? Any advice on how to troubleshoot this?
Already tried:
Searching Google;
Updating the Magnitude Simba ODBC driver for BigQuery to the latest version;
Updating the PBI Gateway to the latest version;
Rebooting the gateway server.

This issue occurs when the ODBC driver tries to pull the data in streams, which goes over port 444. You either need to open port 444 for optimal performance, or disable streams so that the data is pulled using pagination (not recommended for huge data sets).
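As a sketch of the second option, the streaming behavior can be toggled through a driver option in the DSN. The option name below (`EnableHTAPI`) is taken from recent Simba releases' documentation of the High-Throughput (Storage Read) API and may differ in your driver version, so verify it against your driver's installation guide; paths and project names here are placeholders:

```ini
[BigQuery]
Description=Simba ODBC Driver for Google BigQuery
Driver=/opt/simba/googlebigqueryodbc/lib/64/libgooglebigqueryodbc_sb64.so
Catalog=my-gcp-project
OAuthMechanism=1
; Assumed option name -- check your driver version's install guide.
; 0 = fall back to paginated REST results (slower, but avoids port 444)
; 1 = use the High-Throughput / Storage Read API over port 444
EnableHTAPI=0
```

On Windows the same option lives in the DSN's Advanced Options dialog or the corresponding registry key rather than odbc.ini.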

Related

GCP Server crashing with "cloudsql.enable_instance_password_validation"

Over the past day I have been experiencing this error very frequently which results in the cloud instance needing to be reset in order to continue connections:
ERROR: unrecognized configuration parameter "cloudsql.enable_instance_password_validation"
This is operating on a PostgreSQL 14 GCP Cloud SQL community shared 1 vCPU, 0.614 GB instance but was also tested on the standard 1 vCPU, 3.7 GB instance where the problem persisted.
The only code that has changed since this occurrence is a listen/notify call with a Golang PGX pool interface which has been reverted and the problem persists.
The problem hits regularly with any database calls (within 30 mins of a reset) and I have not set anything involving "enable_instance_password_validation" - I am also unable to find any parameters involving this name.
This was an interesting one. @enocom was correct in pointing out that the error experienced within PostgreSQL was a red herring. Throughout the process there were several red herrings, because the problem spanned both the Go server instance and the PostgreSQL server.
The real reason for the crashes was that the Golang PGX pool interface for interacting with the Postgres DB had a default maximum connection cap ('DBMaxPools' in our config) of 14 connections. When we reach this number of connections, the server hangs without any error log and all further connections are refused indefinitely.
To solve this problem all that is required is adding a 'pool_max_conns' within the dbURI when calling:
dbPool, err := pgxpool.Connect(context.Background(), dbURI)
This can be done with something like:
dbURI = fmt.Sprintf(`postgresql://%s:%s@%s:%s/%s?pool_max_conns=%s`, config.DBUser, config.DBPassword, config.DBAddress, config.DBPort, config.DBName, config.DBMaxPools)
Detailed instructions can be found here: pgxpool package info
When setting the PGX max conns, be sure to set this number equal (or close) to the maximum number of concurrent connections your server instance can receive, to avoid this happening again. For GCP Cloud Run instances, this can be set within Deployment Revisions, under 'Maximum requests per container'.
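A minimal, standard-library-only sketch of the URI construction above (the helper name, credentials, and host are made up for illustration); using net/url instead of raw Sprintf also escapes passwords containing special characters:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDBURI assembles a PostgreSQL connection URI carrying a
// pool_max_conns query parameter, which pgxpool parses to cap its
// pool size. All argument values are caller-supplied config.
func buildDBURI(user, password, host, port, dbName string, maxConns int) string {
	u := url.URL{
		Scheme:   "postgresql",
		User:     url.UserPassword(user, password), // escapes special chars
		Host:     fmt.Sprintf("%s:%s", host, port),
		Path:     dbName,
		RawQuery: fmt.Sprintf("pool_max_conns=%d", maxConns),
	}
	return u.String()
}

func main() {
	// Cap the pool at the instance's maximum concurrent requests
	// (e.g. Cloud Run's "Maximum requests per container").
	dbURI := buildDBURI("app", "s3cret", "10.0.0.5", "5432", "mydb", 80)
	fmt.Println(dbURI)
	// prints postgresql://app:s3cret@10.0.0.5:5432/mydb?pool_max_conns=80
	// The URI is then passed to pgxpool.Connect(ctx, dbURI).
}
```

Alternatively, pgxpool lets you parse the DSN into a config and set the cap programmatically, which avoids string formatting entirely.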

Support for Google BigQuery JDBC Driver using KNIME

I get an error when using the following JDBC driver to retrieve BigQuery data in KNIME
The error message appears in the Database Connection Table Reader node as follows:
Execute failed: " Simba BigQueryJDBCDriver 100033" Error getting job status.
However, this only occurs after consecutively running a couple of similar data flows including the BigQuery driver, in KNIME.
After Google searches, no extra info was found. I already updated the driver and KNIME to the latest versions, and also tried to rerun the flow on a different system, with no success.
Are there quotas/limits attached to using this specific driver?
Hope someone is able to help!
I found this issue tracker; it seems that you opened it and there's already interaction with BigQuery's engineering team. I suggest following the interaction there and subscribing to it, so you'll receive e-mails as it progresses.
Regarding your question about limits for the driver: the quotas and limits that usually apply in BigQuery apply to the Simba driver too (e.g. the concurrent queries limit, execution time limit, maximum response size, etc.).
Hope it helps.
Just discovered that a new query limit was set at the company's Group level, due to some internal miscommunication. Sorry for bothering you, and thanks for the feedback!

Error 40 and SqlAzureExecutionStrategy

I have a service fabric service (guest executable), using entityframework core, talking to sql azure.
From time to time I see the following error:
A network-related or instance-specific error occurred while establishing a connection
to SQL Server. The server was not found or was not accessible. Verify that the
instance name is correct and that SQL Server is configured to allow remote connections.
(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
It seems transient as there are numerous database transactions that occur without errors. This seems to occur more when a node is busy.
I've added code at startup to call EnableRetryOnFailure, which enables the SqlServerRetryingExecutionStrategy:
services.AddEntityFrameworkSqlServer()
    .AddDbContext<MyDbContext>(options =>
        options.UseSqlServer(_configuration.GetConnectionString("MyDbConnection"),
            o => o.EnableRetryOnFailure()));
One major caveat: at the moment I'm losing context, so I don't know what data was being updated/inserted, and therefore I don't know whether it eventually succeeded.
A couple of questions:
From the Transient Detection Code it doesn't look like error: 40 is caught, but my understanding is that error 40 may actually be another error (unclear). Is that correct?
Is this really a transient issue or does it mean I have another problem?
Without additional logging (working on it), do we know whether the retry strategy logs the error but still retries, and may in fact have succeeded?
If this is a transient error but it's not caught by the default execution strategy, why not? And what would be the unintended consequences of subclassing SqlAzureExecutionStrategy to include this error?
I've seen this question: Sql Connection Error 40 Under Load, and it feels familiar, but he seems to have resolved it by tuning his database, which I will look at doing. Mainly, though, I'm trying to make my code more resilient in the face of database issues.
Certain versions of EF Core cache the query or requests if the time span between two database transactions is very small, so update your packages to make sure you are using the most recent version.
Query: Threading issues cause NullReferenceException in SimpleNullableDependentKeyValueFactory #5456
Check these other links:
https://github.com/aspnet/EntityFramework/issues/5456
https://github.com/aspnet/Security/issues/739
https://github.com/aspnet/EntityFramework/issues/6347
https://github.com/aspnet/EntityFramework/issues/4120

SQL Server - Timed Out Exception

We are facing a SQL timeout issue, and I found that the error event ID is either 5586 or 3355 (unable to connect / network issue). I could also see a few other DB-related error event IDs (3351 & 3760, permission issues) reported at different times.
What could be the reason? Any help would be appreciated.
Can you elaborate a little? When is this happening? Can you reproduce the behavior or is it sporadic?
It appears SharePoint is involved. Is it possible there is high demand for a large file?
You should check for blocking/locking that might be preventing your query from completing. Also, if you have lots of computed/calculated columns (or just lots of data), your query may take a long time to compute.
Finally, if you can't find something blocking your result or can't optimize your query, it's possible to increase the timeout duration (set it to "0" for no timeout). Do this in Enterprise Manager under the server or database settings.
Troubleshooting Kerberos Errors. It never fails.
Are some of your web apps running under either the Local Service or Network Service account? If so, and if your databases are not on the same machine (i.e. SharePoint is on machine A and SQL on machine B), authentication will fail for some tasks (e.g. timer-job related actions) but not all. For instance, it seems content databases are still accessible (weird, I know, but I've seen it happen).

ODBC SQL Server driver error

I have a VB6 app that accesses a database through an ODBC connection. It will run fine for a few hours, and then I get the following error. Any ideas?
[Microsoft][ODBC SQL Server Driver][DBNETLIB]ConnectionWrite(WrapperWrite())
From Googling the error, it sounds like that's just ADO's way of saying it can't connect and that the server is unreachable. Are there any other services on that server, or that use the database, that become unavailable at the same time as this error? It sounds like the client is just losing its connection, so I'd look for anything around that: dropped network connectivity, or a downed/overwhelmed server, to name a few examples.
Does your program have to reach across a network to get to the Access file?
If so I'd look into any intermittent network connectivity issues, especially if your program is always connected to the data source.
Check any logs you can to see what's happening on your network at the time of the error.
If possible change your app to connect to the data source only when you need to access it and then disconnect when done.
Is there more than one instance of the program running on the same and/or different machines? If so, do they all get the error at the same time?
If possible, try to have more than one instance of your program running on the same machine and see if they all get the error at the same time.
Also:
Does the error happen after about the same amount of time after the initial connection?
Does the error happen after about the same amount of inactivity in your application?