How to get the query displayed when a change is made to a table or a field in a table in PostgreSQL? - sql

I have used MySQL for some projects and recently I moved to PostgreSQL. In MySQL, when I alter a table or a field, the corresponding query is displayed on the page. I could not find such a feature in PostgreSQL (kindly excuse me if I'm wrong). Since the query was readily available, it was very helpful for me to test something in the local database (without explicitly typing the query), copy the printed query, and run it on the server. Now it seems I have to do all of that manually. Even though I'm familiar with the query operations, at times it can be a pretty time-consuming process. Can anybody help me? How can I get the corresponding query displayed in PostgreSQL (as in MySQL) whenever a change is made to a table?

If you use SELECT * FROM ... there should not be any reason for your output not to include newly added columns, no matter how you get your results - whether that is psql on the command line, PgAdmin3, or any other IDE.
After you add new columns, it is possible that these changes are still in an open transaction in another window or SQL session - be sure to COMMIT such a transaction. Note that your changes to data or schema will not be visible to any other database clients until the transaction commits.
If your IDE still does not show the changes, maybe you need to refresh the list of tables or, if that option is not available, restart your IDE. If that still does not work, maybe you should use a better IDE.
If you have used SELECT field1, field2, ... FROM ... then you must add the new fields into your SELECT statement(s) - but this would be true for any other SQL implementation, MySQL included.

You could use the LISTEN / NOTIFY mechanism in PostgreSQL to notify your client whenever the database schema is altered.
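For instance, a minimal sketch (assuming PostgreSQL 9.3 or later for event triggers; the channel and function names here are illustrative, not standard):
CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
BEGIN
  -- Broadcast the command tag (e.g. 'ALTER TABLE') on the 'ddl_events' channel
  PERFORM pg_notify('ddl_events', 'DDL executed: ' || tg_tag);
END;
$$ LANGUAGE plpgsql;
CREATE EVENT TRIGGER ddl_notify ON ddl_command_end
EXECUTE PROCEDURE notify_ddl();
-- Each interested client session then subscribes with:
LISTEN ddl_events;
Note this only tells you that a DDL command ran, not its full text; setting log_statement = 'ddl' in postgresql.conf is the simpler way to capture the exact statements in the server log.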

Related

How to copy a table design from one database to another (vb.net / Access)

The aim is, when updating the application, to update the Access database without altering the data - so the update adds only the new tables or new columns. I want to copy the exact table with its structure to the old database (VB.NET and an Access database).
What I've tried is detecting the differences between the old database and the new one: combobox1 lists only the tables missing from the old database, and combobox2 lists the columns missing from a table that exists in both databases, along with their data types.
So I want to copy the entire table and then create only the missing columns.
Thank you
There is no built-in tool to do this.
But, worse yet, there is no "generate change scripts" feature in Access
(like there is with, say, SQL Server).
So, how do you approach this issue? What do some of the accounting systems or commercial programs that use ms-access as the database do?
Well, you have to build a kind of "upgrade" system in your software.
This means two things:
To add a new column to a table (for example), you NEVER go and open up the Access database with Access, but "add" or "write" the code to add that field in question.
In fact, I had an application deployed out in the field - many desktops.
So, I had a code module called upgrade. And each time I needed a new field or whatever, I would write the code to add that new column.
AS LONG as I always added things through that code module, I was OK. (Never break the rule for adding new fields, tables, or even increasing the length of some field - use code.)
And it became quite easy after I had some code written. I would in fact often cut + paste a previous bit of code to add a new column to a table.
However, after about 5 years, that messy code module had 800+ lines of code in it!
But, I ALSO realized that MOST things, like adding a new column or whatever? Same code over and over.
So, what I did next was build an "upgrade" table. It looked like this:
Version   Action     SQL                           RunCode
2.5       AddTable   tblCustomers
2.5       AddField   "sql here to add the field"
etc. etc.
So, I had a version number, and I would compare it against the upgrade table. I had an "action" column, and the code would simply loop through this table and do whatever each row called for.
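As an illustration only, such an upgrade table could be created with Access DDL along these lines (the table and column names are my guesses, not the author's exact schema):
CREATE TABLE tblUpgrade
  (Version DOUBLE, [Action] TEXT(50), SQLText MEMO, RunCode TEXT(50));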
So, for example, to add a field, you can use Access "DDL" commands (data definition language commands - most SQL systems support this, and so does Access).
so, say like this:
' any new table code goes here:
If lngVer < 1148 Then
    ' add event Invoice text option
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD InvoiceText ntext NULL"
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD HideEventDate bit NULL default 0"
End If
Or, say, to increase a column length from 50 to 255:
db.Execute "ALTER TABLE tblGroupRemind ALTER COLUMN Anotes text(255)", dbFailOnError
As noted, since oh so many of the commands were VERY similar, I started putting that information into a table, and then I would execute the required upgrades in a loop.
For a whole new table? Well, I thought that was too much code, so I always included a blank, empty database - and for new tables, I would place them in that upgrade.accDB database and "transfer/copy" the table from that upgrade database to the real one. That way, I could with great ease create a whole new table in the Access designers, and then add/copy that table to the "upgrade.accDB" database.
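As a sketch of that "transfer/copy" step, Jet SQL can pull a table (structure and data) straight from another database file via its IN clause - the file path and table name here are assumptions:
SELECT * INTO tblNewTable
FROM tblNewTable IN 'C:\MyApp\upgrade.accdb';
Note that SELECT INTO does not carry over indexes or keys, which is one reason copying the fully designed table from the upgrade database is the more complete approach.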
As noted? The above ideas and approaches work quite well.
In fact, over time, I found it LESS hassle while coding away to add the new column or whatever this way - LESS effort than having to open up ms-access, and then the table, and then the designer, and make the changes.
However, the BIG issue with above?
Well, you have to get all users upgraded at least to your EXISTING schema, and there are no automated tools.
In fact, before I had any automated tools? I would open up Notepad, and if I added some field to some table? I would simply type into Notepad that the new field in such and such table was required.
Then, when on a customer site, I would open up their database, and then go look at the Notepad document for the list of changes I was to make. (That is what I was doing before I started automating the process - and of course it is not always practical to be "on site" or to have the customer's database.)
But, ONCE I had all of the above working?
Then during development, I would open up my "upgrade" database and add the new row and action (new table, new column, and more).
I even had a column that defined a function to run AFTER that one command. I mean, quite often when you add a new column, or change something in a table, you need to copy data, or at least process some data, after you make that change.
Once you get the above going?
Then you simply NEVER make changes in the data tables directly, but use your "system" for this. And that works REALLY well.
For one, a customer could open up an older data file - say one from 4 or 5 years ago. The application version number would be detected, and then the upgrade code would run through all the versions to update that database. (And I did this automatically on startup - so they never even knew such an upgrade had occurred.)
So, you just have to make sure that for each change you make, you put that code in your upgrade system, and you are done.
But, for existing systems? You have to look at what changes you made since the last deploy, and write out the "DDL" commands (the ALTER TABLE SQL commands).
There is no automated way of doing this.
As FYI?
One of the BEST and most valuable free tools in Visual Studio is the SQL Server schema compare utility. It will not only automatically detect and tell you the changes between two SQL Server databases, but will also do the upgrade for you. (Very nice.)
But such a system is not available for Access. In fact, so valuable is that utility for SQL Server that you might consider upgrading from Access to SQL Server for this application. With that utility? I can work locally, and add fields, columns, tables, and even stored procedures to that SQL database. When I am on site (or even on VPN), I run that compare tool - it shows the changes, and ALSO has a button to update the target schema.
I don't know of an automated "schema" checker and updater for Access.
So, what I suggest above ONLY works if you put such a system in place, and THEN as a developer always make your schema changes through your upgrade system, and never directly in the database with ms-access.

How can I schedule a script in BigQuery?

At last, BigQuery supports using ; in queries, so I can write more than one query in one "block" if I separate them with semicolons.
If I run the code manually, it works. But I cannot schedule it.
When I want to schedule, I have two choices:
(New) Web UI: I must give a destination table. If I don't, I cannot save the scheduled query. But all my queries are updates and inserts with different "destination tables", like these:
UPDATE project.exampledataset.a
SET date = current_date()
WHERE true
;
INSERT INTO project.otherdataset.b
SELECT c,d
FROM project.otherdataset.c
So I cannot even set up the schedule in the Web UI.
Classic UI: I tried this because the official documentation states that I should leave the "destination table" blank, and the Classic UI allows it. I can set up the scheduling, but it doesn't run when it should. I get this error message by email: "Error status: Dataset specified in the query ('') is not consistent with Destination dataset 'exampledataset'."
AFAIK, scripting (and using semicolons) is a very new feature in BigQuery, but I hope someone can help me.
Yes, I know that I could schedule every query one by one, but I would like to solve this with one big script.
It looks like the scheduled query was defined earlier with a destination dataset and an APPEND/TRUNCATE write disposition. When updating the same scheduled query to a DML query, the GUI doesn't reset the dataset field / table name to NULL. Hence this error comes up based on the previously set dataset and table name in the scheduled query.
Hence the fix is to delete the scheduled query and create it from scratch with the DML query option. It worked for me.
Scripting is supported in scheduled queries now. However, a scripting query, when being scheduled, doesn't support setting a destination table for now. You still need to use DDL/DML to make changes to an existing table.
E.g.:
CREATE OR REPLACE TABLE destinationTable AS
SELECT *
FROM sourceTable
WHERE date >= maxDate
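For example, the two statements from the question can then be combined into a single scheduled script, with no destination table configured:
UPDATE project.exampledataset.a
SET date = CURRENT_DATE()
WHERE true;
INSERT INTO project.otherdataset.b
SELECT c, d
FROM project.otherdataset.c;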
As of 2022, the BQ Console UI will let you create a new scheduled query without a destination dataset, but it won't let you update a prior SELECT to use DDL/DML block syntax. However, you can use the BigQuery Data Transfer API to update the destinationDatasetId field, via transferconfigs/patch. Use transferconfigs/list to get the configId for a given scheduled query.
Note that you can either use the in-browser API Explorer, if you have the appropriate credentials, or write a programmatic solution. This also seems useful for setting/updating any other fields, including renaming scheduled queries.

What problems may occur while querying SQL databases with a big amount of data over the internet

I have a big database on one MSSQL server that contains data indexed by a web crawler.
Every day I want to update the SOLR search-engine index using the DataImportHandler, which is situated on another server and another network.
Solr's DataImportHandler uses a query to get data from SQL. For example, this query:
SELECT * FROM DB.Table WHERE DateModified > Config.LastUpdateDate
The ImportHandler does 8 selects of this type. Each select will get around 1000 rows from the database.
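For reference, the DataImportHandler normally parameterizes such a delta query with its built-in last-index timestamp variable, something like this against the same table:
SELECT * FROM DB.Table WHERE DateModified > '${dataimporter.last_index_time}'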
To connect to SQL Server I am using com.microsoft.sqlserver.jdbc.SQLServerDriver.
The parameters I can add for connection are:
responseBuffering="adaptive/all"
batchSize="integer"
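For what it's worth, responseBuffering goes into that driver's connection URL; a typical URL (host, port, and database name are placeholders) might look like:
jdbc:sqlserver://myhost:1433;databaseName=MyDb;responseBuffering=adaptive
If I recall correctly, batchSize is set on the DIH dataSource element rather than in the JDBC URL itself.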
So my question is:
What can go wrong while doing these queries every day? (except network errors)
I want to know how SQL Server works in this context.
Furthermore, I have to make a decision regarding the way I will implement this importing and how to handle errors, but first I need to know what errors can arise.
Thanks!
Later edit
My problem is that I don't know how these SQL queries can fail. When I call this importer every day, it does 10 queries to the database. If the 5th query fails, I have two options:
roll back the entire transaction and do it again, or commit the data I got from the first 4 queries and somehow redo queries 5 to 10. But if these queries always fail because of some other problem, I need to think of another way to import this data.
Can these SQL queries over the internet fail because of timeouts or something like that?
The only problem I identified after working with this type of import is:
Network problems - if the network connection fails, SOLR rolls back any changes and the commit doesn't take place. In my program I identify this as an error and don't log the changes in the database.
Thanks @GuidEmpty for providing his comment and clarifying this for me.
There could be issues with permissions (not sure if you control these).
It might be a good idea to catch the exceptions you can think of and include a catch-all (Exception exp).
Then treat the overall one as the worst case, roll back (where you can), and log the exception to look at later on.
You don't say what types you are selecting either; keep in mind that text/blob can take a lot more space and could cause issues internally if you buffer any data, etc.
Though on a quick re-read, you don't need to roll back if you are only selecting.
I think you would be better off having a think about what you are hoping to achieve and whether knowing all possible problems will help.
HTH

Problem with a MS Access query after a "Compact and repair" operation

I have an Access application that uses the classical front-end/back-end approach. Yesterday, the back end got corrupted for a reason I don't know. So I opened the back end with Access 2003, and Access asked me if I wanted to repair the file. I said yes, and it seemed to work.
I can open the database, see the tables' contents, and run most of the queries.
However, there is an Access query that doesn't work with a specific WHERE clause.
Example :
-- This works in the original DB, but not in the compacted one:
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3 AND tbl2.f = 1;
-- This works in both the original and the compacted one:
SELECT a, b, c
FROM tbl1 INNER JOIN tbl2 ON tbl1.d = tbl2.d
WHERE e = 3;
When I try to run the queries, nothing happens. The Access process starts to use most of the CPU and the GUI stops responding. If I run the query from the query editor, I can use Ctrl+Break to stop the execution. I tried giving the query a lot of time and it didn't help.
I've checked the execution plan in showplan.out and it seems correct (at least it should not take forever to execute).
I tried to compact the DB again. I tried to import the tables into a new DB. I even tried to import the tables and their data into an mdb file that was in a known good state (from a backup).
Anyone have an idea?
Sounds like an index was corrupted, and when that happens, it's dropped during the compact. Check for a system table called MSysCompactErrors - you'll have to show hidden objects and/or system objects in Tools | Options | View.
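If that table exists, a quick look at it from the query editor will show what the compact dropped:
SELECT * FROM MSysCompactErrors;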
Never compact a Jet MDB without making a backup beforehand. Because of that rule, the COMPACT ON CLOSE function is completely useless, as it's not cancellable, so always make sure it's turned off in all MDBs.
I don't know what type of metadata Access brings along when it imports a table from one database into another. If the metadata is corrupted, importing the table into another database wouldn't necessarily resolve the problem. If practical, you might try creating the tables from scratch in a brand new database and then just exporting and importing (or copy and paste appending) the data into the new database.
I've never seen a table get corrupted like this in such a small database, although with Access anything is possible. Could there be something wrong with the data?
I'd try recreating the query fresh (new name, etc.), and see what happens.
You could even try copying it (even within the same DB or to a brand new one). If that works, the worst case scenario is you have to copy all the objects across to a new DB.
Is there an index on the field tbl2.f?
Also try going into that table in datasheet view, sorting tbl2.f in ascending sequence, and seeing if there is anything really strange in the first or last records.
Do you have access to a SQL Server installation? You could use the Upsizing Wizard (under the Tools -> Database Utilities menu) to copy the data to SQL Server, and see if you get the same problem there.

SQL query giving wrong result on linked server

I'm trying to pull user data from 2 tables, one local and one on a linked server, but I get the wrong results when querying the remote server.
I've cut my query down to
select * from SQL2.USER.dbo.people where persId = 475785
for testing, and found that when I run it I get no results even though I know the person exists.
(persId is an integer, the DB is SQL Server 2000, and dbo.people is a table, by the way)
If I copy/paste the query and run it on the same server as the database, then it works.
It only seems to affect certain user IDs, as running, for example,
select * from SQL2.USER.dbo.people where persId = 475784
works fine for the user before the one I want.
Strangely I've found that
select * from SQL2.USER.dbo.people where persId like '475785'
also works but
select * from SQL2.USER.dbo.people where persId > 475784
brings back records with persIds starting at 22519, not 475785 as I'd expect.
Hope that made sense to somebody.
Any ideas?
UPDATE:
Due to internal concerns about making any changes to the live people table, I've temporarily moved my database so they're both on the same server, and so the linked-server issue doesn't apply. Once the whole lot is migrated to a separate cluster, I'll be able to investigate properly. I'll update this update once that happens and I can work my way through all the suggestions. Thanks for your help.
The fact that LIKE works is no minor clue: LIKE forces integers to strings (so you can say WHERE field LIKE '2%' and you will get all records that start with a 2, even when field is of integer type). Your incorrect comparisons would lead me to think your indexes are corrupt, but you say they work when not used via the link... however, the selected index might be different depending on the use? (I seem to recall an instance when I had duplicate indexes and only one was stale, although that was too long ago to recall the exact cause.)
Nevertheless, I would try rebuilding your index using the DBCC DBREINDEX (tablename) command. If it turns out that doing so fixes your query, you may want to rebuild them all; here is a script for rebuilding them all easily.
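A minimal form, run directly on the server that owns the table (the optional index-name and fill-factor arguments are omitted here):
DBCC DBREINDEX ('dbo.people');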
Is dbo.people a table or a view? I've seen something similar where the underlying table schema had been changed, and dropping and recreating the view fixed the problem, although the fact that the query works if run directly on the linked server does indicate something index-based.
Is the linked server using the same collation? Depending on the index used, I could see something like this happening if the servers were not collation compatible, but the linked server was set up as collation compatible (which tells SQL Server it can run the query on the remote server).
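To compare, you can run this on each server (valid from SQL Server 2000 onwards):
SELECT SERVERPROPERTY('Collation');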
I would check the following:
Check your definition of the linked server, and confirm that SQL2 is the server you expect it to be (see the snippet after this list)
Check and compare the execution plans from both the remote and local servers
Try linking by IP address rather than name, to ensure you have the proper machine
Put the code into a stored procedure on the remote machine, and try calling that instead
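For the first check, something along these lines on the local server shows how SQL2 is defined and which options (such as collation compatibility) are set:
EXEC sp_linkedservers;
EXEC sp_helpserver 'SQL2';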
Sounds like a bug to me - I've read of some issues along these lines, but can't remember specifically what. What version of SQL Server are you running?
select * from SQL2.USER.dbo.people where persId = 475785
For a persId which fails, how does:
SELECT *
FROM OpenQuery(SQL2, 'SELECT * FROM USER.dbo.people WHERE persId = 475785')
behave?