Qt MySQL query giving different results on different machines

The following code works on my PC but gives an error on other PCs. How can I make it run successfully on all machines?
QSqlQuery query;
// Query built by string concatenation: the values are not escaped, and the
// double-quoted string literal only works when MySQL's ANSI_QUOTES mode is off.
QString queryString = "SELECT * FROM " + parameter3->toAscii() + " WHERE " + parameter1->toAscii() + " = \"" + parameter2->toAscii() + "\"";
bool retX = query.exec(queryString);
What prerequisites must be fulfilled for this to run on any PC?

In troubleshooting, if you isolate your query and it returns the result you anticipated (as you have done by using Qt Creator to verify the query returns true), the next step is to take a close look at your code and verify that you are passing the proper parameters into the query for execution.
I have a virgin machine I use for this purpose. I am a software engineer by trade and I am fully aware that I have a ton of software installed on my PC which the common user may not have installed, so the virgin machine allows me to test the code in stand-alone form.
I suggest displaying a message box with the query string just before executing the query. This will verify that the query is correct on the "other machines".
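One way to sidestep quoting differences between machines entirely is to bind the value as a parameter instead of concatenating it into the SQL string (QSqlQuery supports this via prepare() and bindValue()). The same idea is sketched below in Python with sqlite3 purely for illustration; the table, column, and value are made up, and note that placeholders only work for values, not for table or column names:

```python
import sqlite3

# Illustrative only: mirrors the Qt query but binds the value instead of
# concatenating it. Table/column names (here "people"/"name") still have to
# be validated separately, since placeholders cannot replace identifiers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.execute("INSERT INTO people VALUES ('Alice', 30)")

value = "Alice"
rows = conn.execute("SELECT * FROM people WHERE name = ?", (value,)).fetchall()
print(rows)  # [('Alice', 30)]
```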

Certain DLLs were needed, in my case qtguid4.dll, qtcored4.dll and qtsqld4.dll. There was a size difference; once matched, it worked on one PC. However, on other PCs I still get the error "The application failed to initialize 0xc000007b ....."
How is it possible to make the application run?
Brgds,
kNish

How to reproduce with a manual query: [Amazon](500310) Invalid operation: failed to find conversion function from "unknown" to integer;

I recently added a maven dependency to a Dropwizard project:
<dependency>
    <groupId>com.amazon.redshift</groupId>
    <artifactId>redshift-jdbc42-no-awssdk</artifactId>
    <version>1.2.45.1069</version>
</dependency>
which is replacing the org.postgresql.Driver previously used, and since then some of my queries are returning
! Causing: org.skife.jdbi.v2.exceptions.UnableToCreateStatementException: java.sql.SQLException: [Amazon](500310) Invalid operation: failed to find conversion function from "unknown" to integer;
I suspect I might have dozens of queries that will need adapting to work with the new driver.
Because I don't want to have to restart my server every time I want to test whether a change to a query fixes the problem, I want to run the queries manually against the Redshift DB, so that I can quickly identify the part of the query that needs fixing.
My problem: I cannot reproduce the error when running the query manually from inside an IntelliJ DB Console. I even downloaded the JAR of the Amazon Redshift Driver from https://docs.aws.amazon.com/redshift/latest/mgmt/configure-jdbc-connection.html (I downloaded this one: https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/1.2.45.1069/RedshiftJDBC42-no-awssdk-1.2.45.1069.jar) and used it to set up a new DB connection in IntelliJ. But still, running the exact same query manually from the IntelliJ DB Console won't give me the error.
Can anyone think of what configuration might be causing the query to give an error when run by the server but not when run manually from the Console? And how can I get it either to stop throwing the exception on the server or to start producing the same error when run in a Console?
I'm assuming the issue lies somewhere between JDBI and the Amazon Redshift Driver. In any case, here's a fix that solved the problem for now:
For a query like this:
//language=PostgreSQL
public static final String MY_QUERY = "" +
"WITH my_table(someid, groupname) AS (\n" +
" SELECT :someId, :groupName\n" +
")\n" +
"SELECT 'something'\n" +
"FROM my_table mt\n" +
"FULL OUTER JOIN another_table at ON at.accountid = mt.someid AND at.groupname = mt.groupname";
Adding explicit casts got rid of the error (they are written \\:\\:int because JDBI treats a bare : as a named-parameter marker, so the cast's colons must be escaped):
//language=PostgreSQL
public static final String MY_QUERY = "" +
"WITH my_table(someid, groupname) AS (\n" +
" SELECT :someId\\:\\:int, :groupName\\:\\:text\n" +
")\n" +
"SELECT 'something'\n" +
"FROM my_table mt\n" +
"FULL OUTER JOIN another_table at ON at.accountid = mt.someid AND at.groupname = mt.groupname";
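The error likely arises because the driver submits the bound parameters with no declared type, which Redshift cannot infer inside the CTE; the explicit cast supplies one. The same principle, sketched in Python with sqlite3 purely for illustration (the query and value are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A parameter arriving as text is given an explicit type with CAST, the same
# role the ::int cast plays in the Redshift query above.
row = conn.execute("SELECT CAST(? AS INTEGER) + 1", ("41",)).fetchone()
print(row[0])  # 42
```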

Microsoft Access Database Engine causing fatal communications error with Windows Process Activation

I'm using the Access Database Engine Redistributable to read an Access database in my .NET application. For some reason, whenever I use a join in a query against the Access database to fill a DataTable, it causes a fatal communications error with the Windows Process Activation Service. I can populate DataTables without issue as long as only one table is involved; as soon as I add just one join, I get the system error. There are no errors in my application to trap: the system error occurs and then the application stops processing. This only happens on one server; my local computer doesn't seem to have the issue, and I'm using the exact same redistributable on both. Two things boggle my mind: why does a join cause an issue, and why does it produce a system error instead of pushing an exception up to the app? A single-table query works fine.
Steps to populate the DataTable:
accessConnection = New System.Data.OleDb.OleDbConnection( _
    "Provider=Microsoft.ACE.OLEDB.12.0; Data Source='" & uploadedFileName & "'; Persist Security Info=False;")
accessDataAdapter = New System.Data.OleDb.OleDbDataAdapter( _
    "select * from Table1 INNER JOIN Table2 ON Table1.PK = Table2.PK", accessConnection)
accessDtSet = New System.Data.DataSet
accessDataAdapter.Fill(accessDtSet) ' Application fails here
accessView = accessDtSet.Tables("Locations").DefaultView
This is just a guess but maybe try the following provider in your connection string:
Provider=Microsoft.Jet.OLEDB.4.0

Paraview looping with SaveScreenshot in a server is very slow

I mean to get a series of snapshots, at a sequence of time steps, of a layout with two views (one RenderView + one LineChartView).
For this I put together a script, see below.
I do
ssh -X myserver
and there I run
~/ParaView-5.4.1-Qt5-OpenGL2-MPI-Linux-64bit/bin/pvbatch myscript.py
The script is extremely slow to run. I can think of the following possible bottlenecks:
Communication of the graphic part (ssh -X) from the remote server to my computer.
Display of graphics in my computer.
Processing in the server.
Is there a way to assess which is the bottleneck, with my current resources?
(For instance, I know I could get a faster communication to assess item 1, but I cannot do that now.)
Is there a way to accelerate pvbatch?
The answer likely depends on my system, but perhaps there are generic actions I can take.
# Creation of the layout with two views
...
ans = GetAnimationScene()
time_steps = ans.TimeKeeper.TimestepValues
for istep in range(len(time_steps)):
    tstep = time_steps[istep]
    ans.AnimationTime = tstep
    fname = "combo" + '-' + '{:08d}'.format(istep) + '.png'
    print("Exporting image " + fname + " for time step " + str(tstep))
    SaveScreenshot(fname, viewLayout1, quality=100)
Why do you need the -X ?
Just set DISPLAY to :0 and do not forward graphics.
The bottleneck is most likely the rendering on your local display. If your server has an X server, you can perform the rendering on the server by setting the DISPLAY environment variable accordingly, as Mathieu explained.
If your server does not have an X server running, then the best option is to build ParaView on your server using either the OSMesa backend or the EGL backend (if you have a compatible graphics card in it).
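As a concrete sketch of that suggestion (the hostname and ParaView path are the ones from the question; it assumes the server actually runs an X server on display :0):

```shell
# Log in WITHOUT -X, so no rendering is forwarded over the network
ssh myserver

# On the server: render on its own local display instead of a forwarded one
export DISPLAY=:0
~/ParaView-5.4.1-Qt5-OpenGL2-MPI-Linux-64bit/bin/pvbatch myscript.py
```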

Transaction inside of code

I'm having an issue that I'm really not sure how to resolve, and I want to know the best approach I should consider in order to achieve this task.
We are developing an application in VB.NET 2.0 and SQL Server 2005. Users are allowed to cancel a reception based on a purchase which may contain many received goods. During the cancellation process, the user is asked some questions, such as "Do you want to cancel Good #1?". If yes, delete. Then "Do you want to cancel Good #2?", no, do not delete, and another question (if a received item has been issued, a process must be performed manually by the user). At the end, if all goods were successfully cancelled, we have to cancel the reception itself. But sometimes, if an error occurs or certain conditions come up while the user is being asked, we want to undo every action made from the beginning and return everything to its original state. So I thought about transactions.
I know there are SQL transactions, and I know well enough how to use them, but I can't really use them here because the user must perform actions in the middle which may cancel the transaction.
I also remembered TransactionScope from .NET 2.x and later, which can achieve something similar and which I also know how to use. The problem comes with TransactionScope and MSDTC. When using it, we still get an error which says:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
I've tried what is described here in another Stack Overflow post and it works great... until users restart their computer. EVERY time users restart their computer, they must set the value back. Plus, by default, no computer has this value set to On; out of a sample of 10 computers, none had it activated. There are something like 300 computers on which this program is installed, so that is surely not a viable approach either.
So does anyone have an idea of how I can achieve this? Is there any other way to do transactions in code that I can use?
NOTE 1: I know some would say: first ask the user all the questions and keep the answers in memory; once done, if everything went well, perform the deletes. But what if an error occurs when deleting, let's say, good #4? And how can I pass a dynamic list of goods to be deleted to a stored procedure?
NOTE 2: Sorry for my English, I usually speak French.
NOTE 3: Any example in C# can be provided as well, as I know both VB and C#.
Assuming you already have a similar stored procedure to manage cancellation:
create proc CancelGood (@goodID int)
as
SET NOCOUNT ON
SET XACT_ABORT ON

begin transaction
    update table1 set canceled = 1
    where GoodID = @goodID

    update table2 set on_stock = on_stock + 1
    where GoodID = @goodID
commit transaction
The VB code adds a string to a canceledGoods list if the user selects 'Oui'. I'm not familiar with VB.NET; in C# it would look like:
canceledGoods.Add (string.Format("exec dbo.CancelGood {0}", goodID));
Then, if there is at least one string in canceledGoods, build and execute the batch:
// Join the pieces with newlines: a bare "--" comment would otherwise swallow
// everything that follows it on the same line of the batch.
batch = "BEGIN TRANSACTION" + Environment.NewLine +
        "BEGIN TRY" + Environment.NewLine +
        string.Join (Environment.NewLine, canceledGoods.ToArray()) + Environment.NewLine +
        "END TRY" + Environment.NewLine +
        "BEGIN CATCH" + Environment.NewLine +
        "  -- CODE TO CALL IF THERE WAS AN ERROR" + Environment.NewLine +
        "  ROLLBACK TRANSACTION" + Environment.NewLine +
        "  RETURN" + Environment.NewLine +
        "END CATCH" + Environment.NewLine +
        "-- CODE TO CALL AFTER SUCCESSFUL CANCELLATION OF ALL GOODS" + Environment.NewLine +
        "COMMIT TRANSACTION";
conn.ExecuteNonQuery (batch);
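For illustration, here is the same overall pattern (collect the user-approved cancellations first, then apply them inside a single transaction that rolls back on any error) sketched in Python with sqlite3; the table and data are made up:

```python
import sqlite3

# Hypothetical schema: a goods table with a "canceled" flag.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE goods (id INTEGER PRIMARY KEY, canceled INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO goods (id) VALUES (?)", [(1,), (2,), (3,)])
conn.commit()

approved = [1, 3]  # goods the user answered 'Oui' for
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        for good_id in approved:
            conn.execute("UPDATE goods SET canceled = 1 WHERE id = ?", (good_id,))
except sqlite3.Error:
    pass  # everything rolled back; nothing was canceled

canceled = [r[0] for r in conn.execute(
    "SELECT id FROM goods WHERE canceled = 1 ORDER BY id")]
print(canceled)  # [1, 3]
```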

SQL Server Agent 2005 job runs but no output

Essentially I have a job which runs in BIDS and as a stand-alone package, but while it runs under the SQL Server Agent it doesn't complete properly (no error messages, though).
The job steps are:
1) Delete all rows from table;
2) Use a For Each Loop to fill the table from Excel spreadsheets;
3) Clean up table.
I've tried this MS page (steps 1 & 2) and didn't see any need to start changing server-side security.
Also this page on SQLServerCentral.com; no resolution.
How can I get error logging or a fix?
Note I've reposted this from Server Fault as it's one of those questions that's not pure admin or programming.
I have logged in as the proxy account I'm running this under, and the job runs stand-alone, but it complains that the Excel tables are empty.
Here's how I managed tracking "returned state" from an SSIS package called via a SQL Agent job. If we're lucky, some of this may apply to your system.
Job calls a stored procedure
Procedure builds a DTEXEC call (with a dozen or more parameters)
Procedure calls xp_cmdshell, with the call as a parameter (@Command)
SSIS package runs
"local" SSIS variable is initialized to 1
If an error is raised, SSIS "flow" passes to a step that sets that local variable to 0
In a final step, use Expressions to set SSIS property "ForceExecutionResult" to that local variable (1 = Success, 0 = Failure)
Full form of the SSIS call stores the returned value like so:
EXECUTE @ReturnValue = master.dbo.xp_cmdshell @Command
...and then it gets messy, as you can get a host of values returned from SSIS. I logged actions and activity in a DB table while going through the SSIS steps and consult that to try to work things out (which is where @Description below comes from). Here's the relevant code and comments:
-- Evaluate the DTEXEC return code
SET @Message = case
    when @ReturnValue = 1 and @Description <> 'SSIS Package' then 'SSIS Package execution was stopped or interrupted before it completed'
    when @ReturnValue in (0,1) then '' -- Package success or failure is logged within the package
    when @ReturnValue = 3 then 'DTEXEC exit code 3, package interrupted'
    when @ReturnValue in (4,5,6) then 'DTEXEC exit code ' + cast(@ReturnValue as varchar(10)) + ', package could not be run'
    else 'DTEXEC exit code ' + isnull(cast(@ReturnValue as varchar(10)), '<NULL>') + ' is an unknown and unanticipated value'
end

-- Oddball case: if the cmd.exe process is killed, the return value is 1, but the process will
-- continue anyway and could finish 100% successfully... and @ReturnValue will equal 1.
-- If you can figure out how, write a check for this in here.
That last comment refers to the situation where, while SSIS is running, some admin joker kills the CMD session (from, say, Task Manager) because the process is running too long. We've never had it happen, that I know of, but they were uber-paranoid when I was writing this, so I had to look into it...
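For illustration, the CASE evaluation above can be mirrored as a small function; this is a sketch in Python, with description standing in for the Description variable from the T-SQL:

```python
# Sketch mirroring the T-SQL CASE above; the exit-code meanings are the ones
# listed in the original snippet.
def describe_exit(return_value, description="SSIS Package"):
    if return_value == 1 and description != "SSIS Package":
        return "SSIS Package execution was stopped or interrupted before it completed"
    if return_value in (0, 1):
        return ""  # success/failure is logged within the package itself
    if return_value == 3:
        return "DTEXEC exit code 3, package interrupted"
    if return_value in (4, 5, 6):
        return f"DTEXEC exit code {return_value}, package could not be run"
    code = "<NULL>" if return_value is None else str(return_value)
    return f"DTEXEC exit code {code} is an unknown and unanticipated value"

print(describe_exit(0))  # ""
print(describe_exit(5))  # "DTEXEC exit code 5, package could not be run"
```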
Why not use the logging built into SSIS? We send our logs to a database table and then parse them out to another table in a more user-friendly format, and we can see every step of every package that was run. And every error.
I did fix this eventually, thanks for the suggestions.
Basically I logged into Windows with the proxy user account I was running the job under and started to see errors like:
"The For Each File enumerator is empty"
I copied the project files across and started testing. It turned out that I'd still left a file path (N:/) in the properties of the For Each Loop container, although I'd changed the connection properties. It's easier once you've got error conditions to work with. I also had to recreate the variable mapping.
No wonder people just recreate the whole package.
Now fixed and working!