Just created a table with 1000 rows. When I try to add data to a specific column for those 1000 rows, pgAdmin just creates another 1000 rows in the same column, so in total I have 2000 rows.
Used command: insert into Table_Name (column_name) values ('1956-08-07');
I want to add 1000 rows of data to a range of rows in a column.
How can I add data starting (for example) at row 1 of column_name and ending at row 1000?
You need to use an UPDATE statement instead of INSERT.
It should look like this:
update Table_Name set column_name = '1956-08-07'
This will update the value of column_name for each row.
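Note that rows in a SQL table have no inherent order, so "row 1 to row 1000" only makes sense relative to some key. A minimal sketch, assuming a hypothetical id column you can filter on:
update Table_Name
set column_name = '1956-08-07'
where id between 1 and 1000;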
I have a table like:
|code|price|alter_price|
|----|-----|-----------|
|ABC|10|12|
|DEF|11|13|
|GHI|15|18|
How can I update the columns 'price' and 'alter_price' with differing values, based on the 'code' column (which also has a unique constraint)?
I don't want to create duplicates.
I tried:
cursor.execute("INSERT IGNORE INTO table_name VALUES (?,?,?)", (new_code,new_price,new_alter_price))
But then it does not update the table if a row with the given code already exists.
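What's needed here is an upsert, and the exact syntax depends on the database. A sketch in the SQLite/PostgreSQL dialect (the ? placeholders suggest sqlite3), using the column names from the table above:
INSERT INTO table_name (code, price, alter_price)
VALUES ('ABC', 10, 12)
ON CONFLICT (code) DO UPDATE
SET price = excluded.price,
    alter_price = excluded.alter_price;
In MySQL the equivalent clause is ON DUPLICATE KEY UPDATE.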
I am using Oracle SQL Developer. We are loading tables with data, and I need to validate that all the tables are populated and whether there are any columns that are completely null (all the rows are null for that column).
For tables, I am clicking each table, looking at the Data tab to check whether the table is populated, and then looking through each of the columns using filters to figure out whether any columns are completely null. I am wondering if there is a faster way to do this.
You're in luck - there's a fast and easy way to get this information using optimizer statistics.
After a large data load the statistics should be gathered anyway. Counting NULLs is something the statistics gathering already does. With the default settings since 11g, Oracle will count the number of NULLs 100% accurately. (But remember that the number will only reflect that one point in time. If you add data later, the statistics must be re-gathered to get newer results.)
Sample schema
create table test1(a number); --Has non-null values.
create table test2(b number); --Has NULL only.
create table test3(c number); --Has no rows.
insert into test1 values(1);
insert into test1 values(2);
insert into test2 values(null);
commit;
Gather stats and run a query
begin
dbms_stats.gather_schema_stats(user);
end;
/
select table_name, column_name, num_distinct, num_nulls
from user_tab_columns
where table_name in ('TEST1', 'TEST2', 'TEST3');
Using the NUM_DISTINCT and NUM_NULLS you can tell if the column has non-NULLs (num_distinct > 0), NULL only (num_distinct = 0 and num_nulls > 0), or no rows (num_distinct = 0 and num_nulls = 0).
TABLE_NAME COLUMN_NAME NUM_DISTINCT NUM_NULLS
---------- ----------- ------------ ---------
TEST1 A 2 0
TEST2 B 0 1
TEST3 C 0 0
Certainly. Write a SQL script that:
Enumerates all of the tables
Enumerates the columns within each table
Determines the count of rows in each table
Iterates over each column and counts how many rows are NULL in that column
If the number of NULL rows for a column equals the number of rows in the table, you've found what you're looking for.
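A rough PL/SQL sketch of those steps (it issues one full scan per column of every table in the current schema, so it can be slow on large schemas):
set serveroutput on
declare
    l_total number;
    l_nulls number;
begin
    for t in (select table_name from user_tables) loop
        -- count the rows in this table
        execute immediate 'select count(*) from "' || t.table_name || '"'
            into l_total;
        if l_total = 0 then
            dbms_output.put_line(t.table_name || ' has no rows');
        else
            -- count the NULLs in each column and compare to the row count
            for c in (select column_name from user_tab_columns
                      where table_name = t.table_name) loop
                execute immediate 'select count(*) from "' || t.table_name ||
                    '" where "' || c.column_name || '" is null'
                    into l_nulls;
                if l_nulls = l_total then
                    dbms_output.put_line(t.table_name || '.' ||
                        c.column_name || ' is entirely NULL');
                end if;
            end loop;
        end if;
    end loop;
end;
/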
Here's how to do it for just one column in one table; if the COUNT comes back as anything higher than 0, it means there is data in it.
SELECT COUNT(<column_name>)
FROM <table_name>
WHERE <column_name> IS NOT NULL;
This query returns what you want:
select table_name, column_name, nullable, num_distinct, num_nulls
from all_tab_columns
where owner = 'SCHEMA_NAME'
and num_distinct = 0
and num_nulls > 0
order by table_name, column_id;
You can use the script below to get the empty columns in a table (avg_col_len comes from the optimizer statistics, so the statistics must have been gathered first):
SELECT column_name
FROM all_tab_cols
where table_name in (<table>)
and avg_col_len = 0;
I am writing a query in Oracle to update a column based on the same column:
UPDATE TABLE SET A = 'CG-'||A
I have data like:
COLUMN A
121
234
333
I need the data like:
COLUMN A
CG-121
CG-234
CG-333
Basically I am doing this for 30 million records and it's taking a lot of time. Is there any way I can optimize this query? If I create an index on column A, does that improve the performance?
You have the correct query:
UPDATE TABLE
SET A = 'CG-' || A;
Here are different options.
First, you can do this in batches, say 100,000 rows at a time. This limits the amount of undo and redo generated per transaction.
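A sketch of such a batched update (this assumes none of the existing values start with 'CG-', since the loop uses that prefix to track its progress):
begin
    loop
        -- update up to 100,000 not-yet-prefixed rows per pass
        update your_table
        set a = 'CG-' || a
        where a not like 'CG-%'
          and rownum <= 100000;
        exit when sql%rowcount = 0;
        commit;
    end loop;
end;
/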
Second, you can do a "replace" rather than update:
create table tempt as
select * from table;
truncate table "table";
insert into table ( . . . )
select 'CG-' || A, . . .
from tempt;
Third, you can use a generated column and dispense with the update (but only in the most recent versions of Oracle).
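For example (a sketch with hypothetical names; virtual columns are available from Oracle 11g onward):
alter table your_table add (
    a_prefixed generated always as ('CG-' || a) virtual
);
The stored data never changes, so there is nothing to update; queries read a_prefixed instead of A.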
SQL Server 2008 - I have a table with 10 columns and many rows. I want to delete all rows in which a particular column is less than 75 characters long (about 10 words).
The easiest solution is to use the SQL function called LEN, used like this: LEN(nameOfField)
In your case, simply add the function to your WHERE clause in the DELETE command, like this:
DELETE FROM yourTableName where len(aParticularColumn) < 75
Update to the answer: if your aParticularColumn is of datatype text or ntext, you can use DATALENGTH instead of LEN. Note that DATALENGTH returns bytes rather than characters, so for ntext each character counts twice. In this case it would be
DELETE FROM yourTableName where DATALENGTH(aParticularColumn) < 75
Microsoft documentation for the DATALENGTH function
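A quick illustration of the difference between the two functions (results shown as comments):
SELECT LEN(N'abc ') AS char_count,        -- 3: LEN ignores trailing spaces
       DATALENGTH(N'abc ') AS byte_count; -- 8: 4 characters x 2 bytes each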
As @ogixologist said...
DELETE FROM table_name WHERE LEN(column_name) < 75
And here is how I did it using a CTE - check it out:
;with cte as
(
    select column_name,
           temp = LEN(CAST(column_name as nvarchar(4000)))
    from table_name
)
delete from cte where temp < 75;
You can cast to nvarchar before finding the length, or else simply use LEN(column_name).
Replace column_name with whichever nvarchar column you want, and table_name with the name of the table in which your data resides.
To achieve this you can use the approach below - please follow the steps as mentioned:
Create a temp table with the same structure as your MAIN TABLE and insert all the records into the temp table.
You can use an INSERT INTO ... SELECT statement to achieve the first step.
While inserting you will mention the column names and values - use CONVERT(varchar(75), [columnname]), which will truncate the data beyond 75 characters.
Then you can truncate your main table and insert all the records from the temp table back into the main table. A sketch of these steps follows.
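A minimal sketch of that approach, using a hypothetical table MainTable with columns id and col1, where col1 is the column to truncate:
-- copy everything into a temp table, truncating col1 to 75 characters
SELECT id, CONVERT(varchar(75), col1) AS col1
INTO #tmp
FROM MainTable;

-- empty the main table and reload it from the temp table
TRUNCATE TABLE MainTable;

INSERT INTO MainTable (id, col1)
SELECT id, col1
FROM #tmp;

DROP TABLE #tmp;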
Is there a way to duplicate a column from a current database table (copy all the column contents from the table into a temporary table), then
convert the string value in the column and increment it by 1, and then
put all those values, as strings, back into the original table?
So the pseudocode would look like:
copy column1 from tblReal into tmpcolumn1 in tblTemp (set tmpcolumn1 as nvarchar(265))
update tblTemp
set tmpcolumn1 = 'TESTDATA' + 1
copy tmpcolumn1 from tblTemp into column1 in tblReal
So you actually want to change a string column, which holds a number, by incrementing its value by 1. Why would you need three steps for that? Just do an UPDATE statement on the column immediately. I don't see why you need intermediate tables.
UPDATE tblReal SET column1 = column1 + 1
Piece of cake. You can use the cast function to transform the varchar to a number and back again in the update statement.
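A sketch with the casts made explicit (this assumes every value in column1 is numeric; the nvarchar(265) size comes from the question):
UPDATE tblReal
SET column1 = CAST(CAST(column1 AS int) + 1 AS nvarchar(265));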