PostgreSQL - Queries not executed correctly within pgAdmin

I have installed PostgreSQL 9.6 for Windows, and a problem has arisen with pgAdmin 4. First of all, the UI looks corrupted: some of the lines between the components of the tree on the left side are missing.
Queries are also not executed correctly.
For example, when creating a table, the query returns successfully, but the table is not actually displayed after refreshing the Tables or Schemas node on the left. Likewise, the query SELECT * FROM emp; returns successfully but doesn't show any data.
This is the first time I have faced such a problem with PostgreSQL in two years of working with it, and I haven't found any relevant information after researching the issue. I would be grateful for any hints about the possible cause of this behaviour. The corrupted UI is especially confusing. Could it be that I need to install additional libraries?
Installing PostgreSQL 10.2 instead of 9.6 did not solve the problem.
I've also noticed that when clicking a + to expand the components on the left, they are all empty and don't look the way they usually do in pgAdmin.

There were many issues in the pgAdmin 4 runtime (aka the desktop application), which was built on top of Qt5.
Could you try again with the latest development version? Download: https://developer.pgadmin.org/~dpage/runtime-revamp/
Let us know if that solves your problem.

Related

What controls the order of UMLS linked entities from scispacy if the scores are all 1

I'm using scispacy (which is awesome!), but when I type 'tau' into the app found here: https://scispacy.apps.allenai.org/
the UMLS entity gives me the canonical name 'MAPT gene', which is what I want.
But when I do the exact same thing in my Python code based on the app code (see here: https://gist.github.com/DeNeutoy/b20860b40b9fa9d33675893c56afde42),
the first canonical name on the list is 'uridine triacetate' (the second is 'MAPT gene').
In the app code there is the call if show_only_top: break, so I assume their app implementation somehow orders the linked entities differently.
If someone can explain the difference in ordering and how to fix it, that would be great, thanks!
This question was also asked on GitHub (https://github.com/allenai/scispacy/issues/344) and answered there, but I am including the answer below in case anyone ends up here.
The demo app is not currently running the latest version of scispacy, and the inconsistent ordering of the entities was an issue that was fixed in scispacy version 0.4.0.
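For anyone who wants to inspect the ordering directly, here is a minimal sketch using the public scispacy EntityLinker API (this assumes scispacy >= 0.4.0 with spaCy 3 and the en_core_sci_sm model installed):
import spacy
from scispacy.linking import EntityLinker  # registers the "scispacy_linker" pipe

nlp = spacy.load("en_core_sci_sm")
nlp.add_pipe("scispacy_linker", config={"resolve_abbreviations": True, "linker_name": "umls"})
linker = nlp.get_pipe("scispacy_linker")

doc = nlp("tau")
for ent in doc.ents:
    # ent._.kb_ents is a list of (CUI, score) pairs, best candidates first;
    # from 0.4.0 on, ties between equal scores are ordered deterministically.
    for cui, score in ent._.kb_ents:
        print(cui, score, linker.kb.cui_to_entity[cui].canonical_name)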

Uploading CSV using Sqlite3 Console - Different treatment of commas within quotes

I am currently experiencing problems reading CSV files into a sqlite3 database in a Rails application. I have around 20 CSV files, each with 20k lines of data, which I need to read into a database on a regular basis.
Having experimented with a few different approaches, I have opted for using the sqlite3 console, as this enables me to upload the data quickly (in seconds, as opposed to hours going through Rails with the code I was using previously). I tested this approach locally, where I am running sqlite 3.7.15.2, and successfully read the data into my table allitems using the following commands:
sqlite3 development.sqlite3
.separator ','
.import '../newdata.csv' allitems
Encouraged by my success, I proceeded to attempt to recreate this process on a live test site. However, in this case I get a number of errors indicating that the number of columns in newdata.csv doesn't always match the number of columns in allitems. I inspected the data in Excel and found all data to be in the correct number of columns. On further investigation, I discovered that it was commas within text strings which were causing the issues, and found some information online (http://www.sqlite.org/cvstrac/wiki?p=ImportingFiles) suggesting sqlite3 will always split on commas, regardless of whether they're inside quotes.
My first solution was to attempt to use a new separator which would never appear within the text strings (,|,). Although this did succeed, it also caused different problems, as many text fields now contain " at the start and end when displayed on the webpage, which has various knock-on effects. I created an additional workaround for this, converting my separator to "," and inserting " before and after fields which were not strings, but accounting for exceptions in the data is turning into a never-ending fiddle.
Having lost patience with the above approach, I was looking for some advice as to how I could get around this problem. In particular, I am puzzled as to why I do not have any problems when I run the code locally, but face all these issues on the server. The server is currently running sqlite 3.7.3, but I don't know if this is the cause of the issue, or how I could update the version remotely if it was...
Thanks for your suggestions
Importing CSV files robustly is complex; there are still somewhat frequent bugfixes for sqlite3's import function.
Apparently, there was a necessary bugfix between versions 3.7.3 and 3.7.15.
The sqlite3 tool does not really have any dependencies.
Download or compile your own copy, rename it to whatever you like, and use that.
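If updating the binary on the server is awkward, another workaround is to do the CSV parsing outside sqlite3 entirely. A rough sketch in Python (file and table names are taken from the question; it assumes every row has the same number of columns as the table):
import csv
import sqlite3

conn = sqlite3.connect("development.sqlite3")
with open("newdata.csv", newline="") as f:
    rows = list(csv.reader(f))  # the csv module handles commas inside quotes

placeholders = ",".join("?" * len(rows[0]))
conn.executemany("INSERT INTO allitems VALUES (%s)" % placeholders, rows)
conn.commit()
conn.close()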

Using Qt Designer to create a TableView to a Postgres database

I'm creating a plugin in Quantum GIS that uses Postgres as the back end and Qt Designer to make the GUI. I'm using psycopg2 to run scripts in the database and even fetch results of queries to set the values of labels in the GUI. This stuff is working fine for me.
What I would like to do now, after some queries are run by clicking a 'calculate' button, is for the resulting table to be shown in the plugin as a TableView. I know a widget exists expressly for the purpose of viewing tables, but I can't quite figure out how to go about it. I'm not sure if I should be using psycopg2 or PySide, since most examples I have seen online use the latter.
I am wondering if someone can tell me which of psycopg2 and PySide should be used to create the TableView. Second, I am wondering what the 'signal' should be to the TableView widget to display the results of a query in Postgres. Lastly, if anyone can offer some instruction as to how to set up the code, it would be hugely appreciated!
Cheers,
Roman
I've gone ahead and tried following the PyQt documentation, but as it's provided in C++ and I'm only a beginner programmer using Python, I'm not sure if I've caught all the necessary amendments to the code syntax. Anyway, this is what I have so far:
from PySide.QtCore import Qt
from PySide.QtGui import QTableView
from PySide.QtSql import QSqlDatabase, QSqlQueryModel

db = QSqlDatabase.addDatabase("QPSQL")  # the driver name, not the database name
db.setHostName("localhost")
db.setUserName("postgres")
db.setPassword("password")
# Not sure what to do to set the connection. The C++ documentation says to
# put "bool ok = db.open();" -- presumably ok = db.open() in Python?
model = QSqlQueryModel()
# filename is set elsewhere in the plugin
model.setQuery("SELECT name, density, deveff FROM public." + str(filename) + "_rezoning ORDER BY gid;")
model.setHeaderData(0, Qt.Horizontal, "Name")
model.setHeaderData(1, Qt.Horizontal, "Density")
model.setHeaderData(2, Qt.Horizontal, "DevEff")
view = QTableView()
view.setModel(model)
view.show()
What happens when I click the button in my GUI to run the calculations is that a small blank QGIS window briefly flashes and goes away. At least I'm not getting an error, but it's obviously not complete. I assume part of the issue is the connection to the database, which is missing and which I don't know how to set. The other issue is that I would like this to show in the tableView widget in the GUI, but I'm not sure how to specify this...
Any further tips? I really appreciate it.
Roman
If you're planning to use Qt widgets and models, PySide (PyQt, or plain Qt/C++) is the way to go.
With bare psycopg2 you'll have a lot more work to do, and you'll need to implement your own model in order to leverage Qt's model/view classes. That is simply not the Qt way of doing things. PySide (and PyQt) has its own means to connect to a supported database; there's no need for pure Python database adapters like psycopg2. It uses the underlying libqt4-sql library (C++) and the installed plugins (QPSQL, QMYSQL, QSQLITE, etc.).
Essentially you need to:
Connect to a database.
Instantiate a model (QSqlQueryModel, QSqlTableModel, or a custom QAbstractTableModel-derived class).
Attach that model to a view (i.e. QTableView), as in the sketch below.
Take a look at the PySide QtSql Documentation and the PyQt documentation to get an idea. They're mostly compatible/interchangeable, but at a glance I see that the PyQt documentation looks more complete.
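Putting the three steps together, a minimal PySide sketch might look like this (host, credentials, database and table names are placeholders; it assumes the QPSQL driver plugin is installed):
import sys
from PySide.QtGui import QApplication, QTableView
from PySide.QtSql import QSqlDatabase, QSqlQueryModel

app = QApplication(sys.argv)  # provides the event loop mentioned below

# 1. Connect to a database ("QPSQL" is Qt's PostgreSQL driver name).
db = QSqlDatabase.addDatabase("QPSQL")
db.setHostName("localhost")
db.setDatabaseName("mydb")  # placeholder database name
db.setUserName("postgres")
db.setPassword("password")
if not db.open():
    raise RuntimeError(db.lastError().text())

# 2. Instantiate a model.
model = QSqlQueryModel()
model.setQuery("SELECT name, density FROM mytable ORDER BY gid;")

# 3. Attach that model to a view.
view = QTableView()
view.setModel(model)
view.show()

sys.exit(app.exec_())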
EDIT (after your edit):
A Qt GUI application requires an event loop to run, and that's provided by a QApplication instance. Before going any further with the specifics of your app, take the time to understand a few basic concepts first. Here's a nice Getting Started with PyQt Guide.

Check SQL script is valid

As part of a release we run a load of PL/SQL scripts against a database. Recently someone left the ; off the end of a line in one script that called another script, so that script did not get run. Because this did not cause an error - it just didn't get run - it took quite a while to track down what had happened.
I want to check the scripts before they are run for lines that are missing either a ; at the end or a / on the line after. This is made more complicated because 'lines' in the script could actually span more than one line if they are a statement or block of code.
To me it seems that to do this I'm going to have to parse the scripts and then check they meet the above.
I've found ANTLR and wonder if this might be a way to do it, since there seem to be existing PL/SQL grammars, but it looks like that's going to be a steep learning curve for what's just a simple check.
Does anyone know an easy way, or any other tools, Eclipse plugins, etc. that I can use to check for lines in the scripts that are missing either a ; at the end or a / on the line after?
Update
We already do most of the stuff Tom H suggested. The scripts are run on our test server and we have a version table that gets updated at the end. The problem was that the missing semi-colon in the container script meant one script did not get run, but the rest, including the one to update the version number, ran without errors. The problem therefore only got picked up quite a way into testing. Fixing it required the database to be restored before re-running the scripts with the missing semi-colon added, which basically resulted in half a day of testing time being lost. If there were a simple way to check this before running the scripts on the test server, it could save quite a bit of time.
I agree with MattH that you may be going about this the wrong way. I would just add an insert statement to the end of all of your scripts which insert a "version" row into a table in the database. At the end of your deployment scripts it's then an easy task to check that the version table has all of the correct rows in it.
Also, you should run all of your release scripts against your QA server exactly as they will be run in production. That's where all of the testing takes place. You never do anything to the server besides what is in your release steps - you only run the release scripts, and if those release scripts are ever changed then you refresh the QA server with them and redo testing.
When you go to production, your release process has then been fully tested. As a fail-safe measure you can also use tools like Red Gate's SQL Compare and SQL Data Compare to check that production matches the QA server. The data compare would only be against certain tables (look-up tables, etc.). If you have data changes to major tables (1M rows, etc.) then you can write a custom script to check that they are correct.
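As a sketch of that last check (cx_Oracle here, and every name - connection string, table, script identifiers - is a placeholder to adapt):
import cx_Oracle

# Expected script identifiers for this release (placeholders).
expected = {"001_create_tables", "002_load_lookups", "003_grants"}

conn = cx_Oracle.connect("user/password@localhost/XE")
cur = conn.cursor()
cur.execute("SELECT script_name FROM release_version")
applied = {row[0] for row in cur.fetchall()}

missing = expected - applied
if missing:
    print("Scripts that did not run:", ", ".join(sorted(missing)))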
Even if the scripts are different for every release (and not part of a defined source control structure that creates or replaces database objects) I would adopt a practice of breaking the scripts down into the most fundamental units of work per file and deploying them through Ant with the standard sql task. You probably have these types of scripts:
CREATE or REPLACE dbobject...
SQL DML scripts
Anonymous PL/SQL blocks
If you standardize on a consistent statement delimiter (I suggest using "/" since it works with all of the cases above) and set the deployment to fail on error, then Ant will either deploy all of the files or indicate why it couldn't.
I think it would be very difficult to otherwise parse files of one or more SQL and/or PLSQL statements and find missing delimiters if there are no standards on delimiter choice or statements per file.
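To illustrate the point: even a naive checker has to guess where statements end. Here is a rough Python sketch that treats blank lines as statement boundaries - a simplifying assumption that real PL/SQL files easily break, which is exactly the difficulty described above:
import sys

def check_script(path):
    # Flag chunks that neither end with ';' nor are closed by a '/' line.
    with open(path) as f:
        chunks = [c.strip() for c in f.read().split("\n\n") if c.strip()]
    problems = []
    for chunk in chunks:
        last = chunk.splitlines()[-1].strip()
        if not (last.endswith(";") or last == "/"):
            problems.append(chunk.splitlines()[0])
    return problems

for path in sys.argv[1:]:
    for first_line in check_script(path):
        print("%s: statement starting %r may be missing ; or /" % (path, first_line))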
Just a thought, but are you going about this the wrong way?
I assume that, at the file level, the lack of a semi-colon in the file was not a problem, but that it only became a problem when run via the batch processing? If that's the case, maybe you can change your batch processing to cope with this.
If it was the file, then testing should have picked it up. You don't want to parse your input files to make sure they compile, etc.

SQL Server Version Updating Tables

I am part of a software development company looking for a good way to update my SQL Server tables when I put out a new version of the software. I know the answer is probably to use scripts in one form or another.
I am considering writing my own .NET program that runs the scripts to make it a bit easier and more user-friendly. I was wondering if there are any tools out there along those lines. Any input would be appreciated.
Suggest you look at Red Gate's SQL Compare.
What kind of product are you using for your software installation? Products like InstallShield often now include SQL steps as an option for part of your install script.
Otherwise, you could look at using isql/osql to run your script from the command line through a batch file.
One of the developers where I'm currently consulting wrote a rather nifty SQL installer. I'll ask him when he gets in how he went about it.
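As a sketch of the command-line idea above, using sqlcmd (the successor to isql/osql; server and database names are placeholders):
import glob
import subprocess

# Run every script in order; -b makes sqlcmd exit with a non-zero code on
# error, and check=True stops the rollout at the first failing script.
for script in sorted(glob.glob("scripts/*.sql")):
    subprocess.run(
        ["sqlcmd", "-S", "localhost", "-d", "MyDb", "-b", "-i", script],
        check=True,
    )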
I use Red Gate's SQL Compare all the time. Also, you need to make sure to provide a rollback script in case you need to go back to the previous version.
Have a look at DB Ghost Packager Plus.
Packages your source database and the compare and sync engine into a simple EXE for deployment. The installer EXE will automatically update any target schema to match the source on-the-fly at installation time.
Use Red Gate's SQL Compare to generate the change script, and Red Gate's Multi Script to send it to multiple SQL databases at the same time.