Getting a timeout while inserting in a transaction using the node-mssql package - sql-server

I recently started using the mssql package in my Node/Express application for accessing the database.
I read through various documents, including the implementation and the tutorial on how to establish and use a connection. Here are a few things I am confused about.
Clarification
Is it good practice to keep a connection open across the application? My current implementation looks like this:
global.sql = await mssql.connect(config, function (err) { /*LOGS*/});
And wherever I query, I do it like this:
async function getItems() {
  await sql.query`select * from tbl where val in (${values})`;
}
Is it the right way of doing things, or should I do it like this?
async function getItems() {
  const sql = await mssql.connect(config);
  await sql.query`select * from tbl where val in (${values})`;
}
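To make the first option more concrete, this is roughly what I mean by keeping one connection (pool) open for the whole application. It is only a sketch; poolPromise and getItemById are illustrative names, and config is the same connection config used above:
const mssql = require('mssql');

// connect once at startup; the promise resolves to a ConnectionPool
const poolPromise = mssql.connect(config);

async function getItemById(id) {
  const pool = await poolPromise;
  return pool.request()
    .input('id', id)                            // parameterized input
    .query('select * from tbl where id = @id');
}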
Query:
I was going through the docs in the npm README.
There, queries are done in two ways:
1. await sql.query`select * from mytable where id = ${value}`
2. await new sql.Request().query('select 1 as number')
What is the difference between the two, and when should each be used?
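To make the comparison concrete, here is how I understand the same parameterized query would be written in both styles (just a sketch; sql is the global connection from above):
// Style 1: tagged template literal – ${value} is sent as a bound parameter
const r1 = await sql.query`select * from mytable where id = ${value}`;

// Style 2: explicit Request object with a named input parameter
const r2 = await new sql.Request()
  .input('id', value)
  .query('select * from mytable where id = @id');

// both return a result object with a recordset array
console.log(r1.recordset, r2.recordset);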
Blocker:
I am able to run an insert query with:
await sql.query`insert into tbl (val1, val2) values (${item}, ${userId})`
// sql is the global connection variable mentioned above
I tried running the above query inside a transaction. For that I used this:
const transaction = new sql.Transaction()
await transaction.begin();
let request = new sql.Request(transaction);
await request.query(/* insert into tbl .... */)
It was working fine, but after some time, when I retried, the query started timing out with the error Timeout: Request failed to complete in 15000ms.
I can't understand why this is happening.
I tried running the same query from SQL Server Management Studio, and it worked as expected.
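For completeness, the full flow I am attempting looks roughly like this (a trimmed-down sketch of my code rather than the exact source; item and userId are the same values as in the plain insert above):
// sql is the global connection variable from above
const transaction = new sql.Transaction();
try {
  await transaction.begin();

  const request = new sql.Request(transaction);
  await request
    .input('val1', item)
    .input('val2', userId)
    .query('insert into tbl (val1, val2) values (@val1, @val2)');

  await transaction.commit();
} catch (err) {
  // roll back so any locks taken by the insert are released, then rethrow
  await transaction.rollback();
  throw err;
}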

Related

API call to bigquery.jobs.insert failed: Not Found: Dataset

I'm working on importing CSV files from Google Drive, through Apps Script, into BigQuery.
But when the code gets to the part where it needs to send the job to BigQuery, it states that the dataset is not found, even though the correct dataset ID is already in the code.
Thank you very much!
If you are using the Google example code, the error you describe is more than just a copy-and-paste issue. In any case, check that you have the following:
const projectId = 'XXXXXXXX';
const datasetId = 'YYYYYYYY';
const csvFileId = '0BwzA1Orbvy5WMXFLaTR1Z1p2UDg';
try {
  // 'table' is the table resource (tableReference + schema) built earlier in the sample
  table = BigQuery.Tables.insert(table, projectId, datasetId);
  Logger.log('Table created: %s', table.id);
} catch (error) {
  Logger.log('unable to create table');
}
according to the documentation at this link:
https://developers.google.com/apps-script/advanced/bigquery
Also verify that the BigQuery advanced service is enabled in the Services section of the Apps Script editor.
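For reference, the load part of the linked sample looks roughly like this (a sketch adapted from the documented example, not your exact code; the "Not Found: Dataset" error on BigQuery.Jobs.insert usually means the datasetId used here does not yet exist in that project):
function loadCsvFromDrive() {
  var projectId = 'XXXXXXXX';   // must be the project that owns the dataset
  var datasetId = 'YYYYYYYY';   // must already exist, otherwise Jobs.insert fails with "Not Found: Dataset"
  var tableId = 'my_table';     // illustrative table name
  var csvFileId = '0BwzA1Orbvy5WMXFLaTR1Z1p2UDg';

  // read the CSV from Drive as a blob
  var file = DriveApp.getFileById(csvFileId);
  var data = file.getBlob().setContentType('application/octet-stream');

  // describe the load job and point it at project + dataset + table
  var job = {
    configuration: {
      load: {
        destinationTable: {
          projectId: projectId,
          datasetId: datasetId,
          tableId: tableId
        },
        skipLeadingRows: 1
      }
    }
  };
  job = BigQuery.Jobs.insert(job, projectId, data);
  Logger.log('Load job started: %s', job.jobReference.jobId);
}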

BigQuery: Table name missing dataset while no default dataset is set in the request

Here is the issue with a BigQuery query.
I know this query is missing a dataset name, hence the error: "Table name "my_table" missing dataset while no default dataset is set in the request."
select * from my_table;
Changing 'my_table' to 'my_dataset.my_table' will fix the issue.
But can somebody help me with setting a default dataset?
The error message clearly indicates that BigQuery has such an option.
Depending on which API you are using, you can specify the defaultDataset parameter when running your BigQuery job. More information on the jobs.query API can be found here: https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query.
For example, using the Node.js API's createQueryJob (https://googleapis.dev/nodejs/bigquery/latest/BigQuery.html#createQueryJob), you can do something similar to this:
const options = {
  keyFilename: process.env.GOOGLE_APPLICATION_CREDENTIALS,
  projectId: process.env.GOOGLE_APPLICATION_PROJECT_ID,
  defaultDataset: {
    datasetId: process.env.BIGQUERY_DATASET_ID,
    projectId: process.env.GOOGLE_APPLICATION_PROJECT_ID
  },
  query: `select * from my_table;`
};
const [job] = await bigquery.createQueryJob(options);
let [rows] = await job.getQueryResults();
The short answer is: SET @@dataset_id = 'whatever';
But, this is a scripting command, so you will have to enclose your SQL query in a BEGIN ... END block. Note the semicolon. Scripting gives you some added flexibility. On the other hand, it prevents the console from doing that fun thing where it estimates the processing cost.
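As an illustration (only a sketch, not part of the original answer), the scripted form can be sent through the Node.js client from the earlier answer, with the SET statement at the top of the script and the query wrapped in a BEGIN ... END block as suggested; the dataset and table names are placeholders:
// Sketch: a multi-statement script that sets the default dataset first
const query = `
  SET @@dataset_id = 'my_dataset';
  BEGIN
    SELECT * FROM my_table;
  END;
`;

const [job] = await bigquery.createQueryJob({ query });
const [rows] = await job.getQueryResults();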

How do you perform Asynchronous Query using BigQuery client library version 0.26.0-beta?

It seems that my query execution is now performed synchronously rather than asynchronously using the latest (as of Oct. 17, 2017) version of the BigQuery libraries available via "com.google.cloud:google-cloud-bigquery:0.26.0-beta".
I need to use the latest version so that I can properly set the maxBillingTier option.
Here is my code snippet:
QueryJobConfiguration request =
    QueryJobConfiguration.newBuilder(query)
        .setDefaultDataset(datasetId)
        .setMaximumBillingTier(MAX_BILLING_TIER)
        .build();
BigQuery.QueryOption pageSizeOption = BigQuery.QueryOption.of(
    BigQuery.QueryResultsOption.pageSize(PAGE_SIZE));
BigQuery.QueryOption maxWaitOption = BigQuery.QueryOption.of(
    BigQuery.QueryResultsOption.maxWaitTime(MAX_WAIT_MILLIS));
QueryResponse response = null;
try {
    response = bigQuery.query(request,
        pageSizeOption,
        maxWaitOption);
} catch ( /* exception-handling code deleted for brevity */ ) {
    ...
}
return response.getJobId();
A similarly formatted request using QueryRequest from version 0.24.0 instead of QueryJobConfiguration would have (quickly) returned the jobId, which I could then use to poll for status. Now, I suddenly have no simple way of reporting the status of the query to my calling code.
Update:
I was able to get asynchronous query results with this approach:
QueryJobConfiguration request =
    QueryJobConfiguration.newBuilder(query)
        .setDefaultDataset(datasetId)
        .setMaximumBillingTier(MAX_BILLING_TIER)
        .build();
JobInfo jobInfo = JobInfo
    .newBuilder(request)
    .setJobId(jobId)
    .build();
Job job = bigQuery.create(jobInfo);
QueryResponse response = job.getQueryResults(pageSizeOption,
    maxWaitOption);
return response.getJobId();
Of course, I need to add exception-handling, but that's the gist. However, it is far less elegant than the simpler format available in version 0.24.0-beta.
Is there a more elegant solution?
Would setting the priority to BATCH have an effect on this?

Dapper: Procedure or function has too many arguments specified

While using Dapper to call a stored procedure, I'm receiving the following error:
Procedure or function has too many arguments specified
I'm using DynamicParameters to add a list of simple parameters to the query.
The parameter code looks like this:
var parameters = new DynamicParameters();
parameters.Add(p.Name, p.Value, direction: p.Mode);
The query code looks like this:
var result = _connection.Query<T>(
    string.Format("{0}.{1}", request.SchemaName, request.StoredProcedureName),
    parameters,
    commandType: CommandType.StoredProcedure,
    transaction: _transaction);
The executing SQL in the profiler shows as follows:
exec dbo.storedProcedureName @ParameterNames1=N'ParameterName',@ParameterNames2=N'ParameterName',@RemoveUnused=1
@ParameterNames1 is not at all what the parameter is called. Actually, the names are being passed in as the values (N'ParameterName'). The @RemoveUnused parameter seems completely random to me, as it does not occur in the calling code at all.
The full code for this can be found here: GitHub project at lines 61 and 228.
Edit: I've found that the issue is caused by calling the same procedure twice, with different result sets. So the first time I'm calling it with Query<T1>, the second time with Query<T2>. Why Dapper is having trouble with this scenario is still a mystery.
I recently came across this issue and this appears to be caused by the following:
Your stored procedure can return multiple datasets (maybe based on a condition parameter).
You are calling the stored procedure using Query<T>() instead of QueryMultiple() and then mapping datasets via Read<T>.
We recently upgraded from an old version of Dapper to v1.4 in order to support Table Variable Parameters and we started experiencing this behaviour as a direct result of the upgrade.
Solution:
Convert your Query<T> based code to a QueryMultiple implementation.
I simply can't reproduce this:
public void SO25069578_DynamicParams_Procs()
{
    var parameters = new DynamicParameters();
    parameters.Add("foo", "bar");
    try { connection.Execute("drop proc SO25069578"); } catch { }
    connection.Execute("create proc SO25069578 @foo nvarchar(max) as select @foo as [X]");
    var tran = connection.BeginTransaction(); // gist used transaction; behaves the same either way, though
    var row = connection.Query<HazX>("SO25069578", parameters,
        commandType: CommandType.StoredProcedure, transaction: tran).Single();
    tran.Rollback();
    row.X.IsEqualTo("bar");
}
public class HazX
{
    public string X { get; set; }
}
works fine. There is a RemoveUnused property on DynamicParameters, but when using dynamic parameters, that shouldn't be added. I've even tried using the template-based constructor:
parameters = new DynamicParameters(parameters);
but again: this works fine. Is it possible that you're using a really, really old version of dapper? What version are you using?
I realize this is an old thread. I'm however using the latest version of the Nuget package (1.60.6) and encountered this problem recently.
To reproduce this, you'll need a stored procedure that, based on an input parameter, can return one or two (more than one) result sets. In the code, I use two different extension methods to call it, too (QueryMultipleAsync, which sets the parameter to 1/true, and QueryAsync, which sets it to 0/false). If your test ends up calling the SP in a way that returns multiple result sets first, subsequent calls that need one result set will fail with this error.
The only way I managed to solve this was to break down the SP into 2 so they have different names.
For reference, here is how I call the SP:
var data = await sqlConnection.QueryAsync<T>(
    StoredProcedureName,
    parms,
    transaction: null,
    commandTimeout: null,
    commandType: CommandType.StoredProcedure);
and
var data = await sqlConnection.QueryMultipleAsync(
        StoredProcedureName,
        param: p,
        commandType: CommandType.StoredProcedure)
    .Map<Type1, Type2, long>(
        o1 => o1.Id,
        o2 => o2.FkId ?? 0,
        (o1, o2) => { o1.Children = o2.ToList(); }
    );
This Dapper issue is caused by the Read method for reading datasets after a QueryMultiple.
In this case, Dapper caches the parameters, and if you call the same stored procedure with the same parameters using a Dapper Query method, it will fail.
To solve the problem, change the call to the QueryMultiple method from this:
var reader = conn.QueryMultiple (spName, pars, commandType: CommandType.StoredProcedure);
to this:
var cmd = new CommandDefinition (spName, pars, commandType: CommandType.StoredProcedure, flags: CommandFlags.NoCache);
var reader = conn.QueryMultiple (cmd);
I recently hit this problem, caused by calling the same procedure twice using different Dapper methods.
The first call to the SQL stored procedure was via .QueryMultiple. Calling the same procedure with parameters again using .QuerySingleOrDefault resulted in the parameters being @ParameterNames1 and @RemoveUnused, as mentioned in the original question.

Grails transactions (not GORM based but using Groovy Sql)

My Grails application is not using GORM but instead uses my own SQL and DML code to read and write the database (The database is a huge normalized legacy one and this was the only viable option).
So, I use the Groovy Sql Class to do the job. The database calls are done in Services that are called in my Controllers.
Furthermore, my datasource is declared via DBCP in Tomcat - so it is not declared in Datasource.groovy.
My problem is that I need to write some transaction code, that means to open a transaction and commit after a series of successful DML calls or rollback the whole thing back in case of an error.
I thought that it would be enough to use groovy.sql.Sql#commit() and groovy.sql.Sql#rollback() respectively.
But in these methods' Javadocs, the Groovy Sql documentation clearly states:
If this SQL object was created from a DataSource then this method does nothing.
So, I wonder: What is the suggested way to perform transactions in my context?
Even disabling autocommit in the datasource declaration seems to be irrelevant, since those two methods "...do nothing".
The Groovy Sql class has withTransaction
http://docs.groovy-lang.org/latest/html/api/groovy/sql/Sql.html#withTransaction(groovy.lang.Closure)
public void withTransaction(Closure closure)
throws java.sql.SQLException
Performs the closure within a transaction using a cached connection. If the closure takes a single argument, it will be called with the connection, otherwise it will be called with no arguments.
Give it a try.
Thanks James. I also found the following solution, reading http://grails.org/doc/latest/guide/services.html:
I declared my service as transactional
static transactional = true
This way, if an Error occurs, the previously performed DMLs will be rolled back.
For each DML statement, I throw an Error with a descriptive message. For example:
try {
    sql.executeInsert("""
        insert into mytable1 (col1, col2) values (${val1}, ${val2})
    """)
} catch (e) {
    throw new Error("you can't enter empty val1 or val2")
}
try {
    sql.executeInsert("""
        insert into mytable2 (col1, col2) values (${val1}, ${val2})
    """)
} catch (e) {
    throw new Error("you can't enter empty val1 or val2. The previous insert is rolled back!")
}
Final gotcha! When the service is called from the controller, the call must be wrapped in a try/catch, as follows:
try {
    myService.myMethod(params)
} catch (e) {
    // http://jts-blog.com/?p=9491
    Throwable t = e instanceof UndeclaredThrowableException ? e.undeclaredThrowable : e
    // use t.toString() to send info to user (use in view)
    // redirect / forward / render etc
}