SELECT COUNT doesn't return all rows in table - SQL

I have a table in the database which is supposed to have more than 1k rows. The DB is PostgreSQL. I use the following command:
select count(*) from icdten; it returns 1000, which is wrong,
and also
select * from icdten;
returns only the first 1000 rows, which is wrong; I want all of them. Googling didn't turn up anything, or maybe I was googling the wrong thing.
EDIT1: I use pgAdmin, so maybe it is a pgAdmin issue. I just did not find that option looking through the interface. The table is supposed to have 14k rows.

A limit is possibly set in pgAdmin. Please look at the Options/Query tab of pgAdmin as described in http://www.pgadmin.org/docs/1.4/options-tab3.html (http://www.pgadmin.org/docs/1.16/options-tab4.html for version 1.16, which you mentioned):
"Maximum rows to retrieve" can be set to 1000 (the default is 100; people usually change this value), and "Count rows if estimated less than" may also affect the COUNT function (it can use the count from table statistics instead of the real table count, so try temporarily setting this value to a big number).
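To check whether the estimate is what you are seeing, a quick sketch like the following compares the planner's statistics-based estimate with a real count (the table name icdten is taken from the question):
-- reltuples is only an estimate, refreshed by ANALYZE/VACUUM,
-- so it can differ from the real row count.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'icdten';
-- The real count, regardless of what the client UI displays.
SELECT count(*) AS actual_rows FROM icdten;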

Here is how to troubleshoot this issue:
First, try from psql. While select count(*) should give you the same result in both, maybe there is something going on. Also, there is no limit there, so you can:
\o testfile
select * from icdten;
\q (exits psql)
wc -l testfile
If that still shows about 1000 lines, then you probably do have 1000 rows in that table; start making sure you are connected to the right database, querying the table you think you are, and so on.
EXPLAIN ANALYZE may also be helpful in that case.
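As a rough sketch, running it against the table from the question looks like this:
EXPLAIN ANALYZE SELECT * FROM icdten;
-- In the output, the "actual ... rows=" figure on the scan node is the number
-- of rows really produced, independent of any client-side display limit.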

This is not an error. It's simply a default parameter in pgAdmin 4, which ships with some PostgreSQL installations.
You can change the parameter by going to the PostgreSQL installation directory and navigating to pgAdmin 4\web. The directory for version 12 looks like this:
C:\Program Files\PostgreSQL\12\pgAdmin 4\web
Open config.py with a text editor and search for ON_DEMAND_RECORD_COUNT. It is initially set to 1000.
##########################################################################
# Number of records to fetch in one batch in query tool when query result
# set is large.
##########################################################################
ON_DEMAND_RECORD_COUNT = 1000
You can comment this line out, but it could result in other errors. I would suggest changing it to a large value like 10,000,000. If you can't save the file after the modification, copy the file somewhere else, make the change and save it, then copy it back to the original folder, replacing the original file.

Related

BigQuery data using SQL "INSERT INTO" is gone after some time

Today I noticed another strange behaviour of BigQuery.
I ran a UDF in standard SQL in the BQ web UI:
CREATE TEMPORARY FUNCTION ...
INSERT INTO projectid.dataset.inserttable...
All seems good; the results of the UDF SQL are inserted into the insert table correctly, which I can tell from "Number of rows". But the table size is not correct; it still shows the table size from before running the insert query. Furthermore, I found that all the inserted rows were gone an hour later.
Some more info I found: when I run a "DELETE FROM insert table true" or a "SELECT ...", the number of deleted rows and the table size do seem consistent with the inserted data. But I just cannot preview the insert table correctly in the web UI.
So I am guessing the "Detail" or "Preview" info of the table has a time delay? Do you have any idea about this behaviour?
The preview may have a delay, so SELECT * FROM YourTable; will give the most up-to-date results, or you can use COUNT(*) just to verify that the number of rows is correct. You can think of it as being similar to streaming, if you have tried that, where some rows may be in the streaming buffer for a while before they make it into regular storage.
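For example, a quick check along these lines (using the table name from the question) confirms whether the rows are really in storage, regardless of what the preview shows:
-- The count reflects table storage, not the cached preview.
SELECT COUNT(*) AS row_count
FROM `projectid.dataset.inserttable`;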

Oracle SQL Developer Spool function is limiting my output?

I am working with SQL right now and I am trying to write a bit of code that pulls a section of data from a database and saves it off to a file. This particular section of code is usually formatted all on one line and about 22,000-23,000 characters long on average. I can already pull some of the code but the pull stops after 4002 characters. My current code looks something like this:
SET HEADING OFF
SET ECHO OFF
SET LONG 100000
SET WRAP OFF
SPOOL output.txt
Select ________ (my select statement already works on its own);
SPOOL OFF;
I don't know the SQL language at all; I'm looking for some direction as to what functions I could research to help me out.
My end goal with this code is to be able to enter a value, have my code use that value to pull a value from one database, and from there use both values to pull a long string of code from another database. Would this kind of thing be possible in SQL?
Try adding this:
SET SERVEROUTPUT ON SIZE 1000000
I would really suggest that you try the SQL*Plus help.
It's really useful and will explain all the parameters to you.
Good luck :)
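That said, truncation of long values in a spool is often governed by the LONG, LONGCHUNKSIZE and LINESIZE settings rather than SERVEROUTPUT. A rough sketch of a spool script with those raised (the values are arbitrary examples, and whether this removes the 4002-character cutoff depends on the column's data type) might look like:
SET HEADING OFF
SET ECHO OFF
SET LONG 100000
SET LONGCHUNKSIZE 100000
SET LINESIZE 32767
SET TRIMSPOOL ON
SET WRAP OFF
SPOOL output.txt
REM your existing SELECT statement goes here
SPOOL OFF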
In SQL Developer, set:
Tools > Preferences > Database > Worksheet > "Max rows to print in a script" (increase the number)

How to get the query displayed when a change is made to a table or a field in a table in Postgresql?

I have used MySQL for some projects and recently moved to PostgreSQL. In MySQL, when I alter a table or a field, the corresponding query is displayed on the page. I could not find such a feature in PostgreSQL (kindly excuse me if I'm wrong). Since the query was readily available, it was very helpful for me to test something in the local database (without explicitly typing the query), copy the printed query, and run it on the server. Now it seems I have to do all of that manually. Even though I'm familiar with the query operations, at times it can be a pretty time-consuming process. Can anybody help me? How can I get the corresponding query displayed in PostgreSQL (like in MySQL) whenever a change is made to a table?
If you use SELECT * FROM ..., there should not be any reason for your output to miss newly added columns, no matter how you get your results, whether that is psql on the command line, pgAdmin III, or any other IDE.
After you add new columns, it is possible that these changes are still in an open transaction in another window or SQL session; be sure to COMMIT that transaction. Note that your changes to data or schema will not be visible to any other database client until the transaction commits.
If your IDE still does not show the changes, you may need to refresh the list of tables, or, if that option is not available, restart your IDE. If that still does not work, maybe you should use a better IDE.
If you have used SELECT field1, field2, ... FROM ..., then you must add the new fields to your SELECT statement(s), but this would be true for any other SQL implementation, MySQL included.
You could use the LISTEN / NOTIFY mechanism in PostgreSQL to notify your client when the database schema is altered.
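A minimal sketch of that idea (the channel name schema_changes and the table/column are made up for illustration):
-- Session A: subscribe to the channel.
LISTEN schema_changes;
-- Session B: make the change, then publish the statement as the payload.
ALTER TABLE mytable ADD COLUMN new_col integer;
NOTIFY schema_changes, 'ALTER TABLE mytable ADD COLUMN new_col integer';
-- Session A receives an asynchronous notification carrying that payload
-- (psql prints it after its next command; drivers expose it through their APIs).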

Way to know the amount of data generated by a given query?

Typically, it is possible to know how many rows a query returns by using COUNT(*).
In the same manner, is there any way to know how many megabytes, for example, the output of a given query is?
Something like
SELECT MEMORYUSE(*) FROM bla bla bla
EDIT: I like the exec sp_spaceused ... approach, as it can be scripted!
For completeness, there are a couple of options that give you more information about the executing / executed query, which you can view / set using SSMS as well. By default, the row count for the query is shown in the bottom right of SSMS. There are also advanced query options, which you can set globally in the SSMS options. Of course, you can also turn any of these options on for a particular statement or batch by including them in the query, e.g. 'set showplan_text on', etc.
You can also turn on 'show client statistics' in SSMS.
If you're using SQL Server, turn on Client Statistics and you'll find "Bytes sent from client" and "Bytes received from server".
Here is a related question:
SQL Finding the size of query result
I think this will be useful:
SQL Server Query Size of Results Set
I don't think there is any way to do it without putting the results into a temp table and checking its size.
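A rough, scriptable sketch of that temp-table approach for SQL Server (the source query is a placeholder from the question):
-- Materialize the query result into a temp table.
SELECT *
INTO #query_result
FROM bla;  -- replace with the actual query
-- Report reserved, data and index sizes for the temp table.
EXEC tempdb..sp_spaceused '#query_result';
DROP TABLE #query_result;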

find and replace data in multiple records

I have a MySQL 5 database table with a longtext field that permits HTML code (via Markdown) to be entered as data. Unfortunately, I made a minor copy/paste error that I didn't catch until I had more than 200 records. Because it's the same error on each record,
href:"http://someurl.com"
as opposed to
href="http://someurl.com"
it would be easier if there were some SQL I could write that would find "href:" in all records and replace it with "href=" in the same transaction, rather than editing each record individually. Is there anything I can do, or am I just screwed?
You can do this:
UPDATE Data_Table
SET Html_Column = REPLACE(Html_Column, 'href:', 'href=');
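If you want to be a bit more careful, a sketch like this first counts the affected records and then limits the update to them (same placeholder table and column names as above):
-- Sanity check: how many records contain the bad string?
SELECT COUNT(*) FROM Data_Table WHERE Html_Column LIKE '%href:%';
-- Only touch rows that actually need fixing.
UPDATE Data_Table
SET Html_Column = REPLACE(Html_Column, 'href:', 'href=')
WHERE Html_Column LIKE '%href:%';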
If you are using phpMyAdmin, click on SQL and run this:
UPDATE Table_Name
SET Column_Name = REPLACE(Column_Name, 'href:', 'href=');