Two column table partition in Hive

I have a table with two columns:
id value
abc 11
xyz 12
pqr 11
mno 13
pqr 12
stu 13
wxy 11
I have to partition this table by "value" using Hive or SQL queries.

SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.dynamic.partition=true;
create table table1 (id string) partitioned by (value string) stored as ORC tblproperties ("orc.compress" = "SNAPPY");
INSERT INTO table1 PARTITION (value)
select * from table where value is not NULL;

After exploring it, I got the answer:
create table table1 (id string) partitioned by (value string) stored as ORC tblproperties ("orc.compress" = "SNAPPY");
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.dynamic.partition=true;
SET hive.exec.max.dynamic.partitions=2048;
SET hive.exec.max.dynamic.partitions.pernode=256;
INSERT INTO table1 PARTITION (value)
select * from table where value is not NULL;
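To confirm the dynamic partitions were created, a quick check (not part of the original answer, just standard Hive commands) would be:
-- not from the original answer: verify the partitions exist
SHOW PARTITIONS table1;
-- spot-check the rows that landed in one partition
SELECT * FROM table1 WHERE value = '11' LIMIT 10;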

Related

SQL Server trigger if update() with condition

I have a table in SQL Server that has 3 columns: ID, NAME, VALUE.
This table has 2 rows with ID=1 and ID=2.
(The value of ID doesn't change).
The value of column VALUE changes constantly. Every time VALUE changes, I want to insert the updated value into a table (Device1 for ID=1, Device2 for ID=2).
I created an update trigger with if update(VALUE) begin ... but it doesn't do the job.
Is there a way to add a condition in if update(VALUE) so that it works per row?
I used this query:
Create Trigger insertIntoDevices
On ITEMS
For Update
As
If Update(VALUE)
Begin
Insert Into table device1
Where ID = 1
Insert Into table device2
Where ID = 2
End
With this query, each update of column VALUE inserts the value into both device1 and device2, and that duplicates values in my tables device1 and device2.
Table creation is below:
CREATE TABLE TestTable(
ID INT IDENTITY(1,1),
Name VARCHAR(5),
VALUE NVARCHAR(50)
)
GO
CREATE TABLE device1(
VALUE NVARCHAR(50)
)
GO
CREATE TABLE device2(
VALUE NVARCHAR(50)
)
GO
Insert rows for ID=1 and ID=2:
GO
INSERT INTO TestTable(Name,Value)
VALUES('Test1','test1'),('TEST2','test2')
Firstly, to find the new values of each row you can use the inserted pseudo-table, but it can also include unchanged data. For example, with ID=1, Name='test1' and VALUE='test1', an update of only the Name column will still show up in inserted.
Secondly, to find the old values of each row you can use the deleted pseudo-table.
After that, we keep only the rows where VALUE was actually updated.
To sum up: the inserted and deleted rows give us each row's new and old values, and we intersect them (INNER JOIN) to find only the changed values.
CREATE TRIGGER [dbo].[insertIntoDevices]
ON [dbo].[TestTable]
AFTER UPDATE
AS
BEGIN
    DECLARE @InsertedTable TABLE (
        InsertedID INT,
        InsertedName VARCHAR(5),
        InsertedVALUE NVARCHAR(50)
    )
    DECLARE @DeletedTable TABLE (
        DeletedID INT,
        DeletedName VARCHAR(5),
        DeletedVALUE NVARCHAR(50)
    )
    -- capture the new (inserted) and old (deleted) row images
    INSERT INTO @InsertedTable (InsertedID, InsertedName, InsertedVALUE)
    SELECT ID, [Name], [Value] FROM inserted;
    INSERT INTO @DeletedTable (DeletedID, DeletedName, DeletedVALUE)
    SELECT ID, [Name], [Value] FROM deleted;
    -- insert only rows whose VALUE actually changed
    INSERT INTO device1(VALUE)
    SELECT UpdatedValue = it.InsertedVALUE
    FROM @InsertedTable AS it
    INNER JOIN @DeletedTable AS dt
        ON it.InsertedID = dt.DeletedID
       AND ISNULL(dt.DeletedVALUE, '') <> ISNULL(it.InsertedVALUE, '')
    WHERE it.InsertedID = 1
    INSERT INTO device2(VALUE)
    SELECT UpdatedValue = it.InsertedVALUE
    FROM @InsertedTable AS it
    INNER JOIN @DeletedTable AS dt
        ON it.InsertedID = dt.DeletedID
       AND ISNULL(dt.DeletedVALUE, '') <> ISNULL(it.InsertedVALUE, '')
    WHERE it.InsertedID = 2
END
To test, I used the update queries below:
--Example 1
UPDATE TestTable
SET Value='selam'
WHERE ID = 1
--Example 2
UPDATE TestTable
SET Value='hi'
WHERE ID = 2
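After running these updates, each device table should contain only the changed value. A quick check (not part of the original answer) would be:
-- not in the original answer: verify what the trigger inserted
SELECT * FROM device1;   -- should contain 'selam'
SELECT * FROM device2;   -- should contain 'hi'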

Copy data from one column (bigint) to another column (bigint[]) in postgres sql?

I need to copy all data from one column customerId to another new column (customerIds - which is in different format) in the same table. There is a column called customerId whose type is bigint and I need to copy data from this column to customerIds whose data type is bigint[].
Is there any way to do this in PostgreSQL? I know how to copy data from one column to another column of the same type, but I'm not sure how to do it when the new column is an array.
For the same table, if both columns had the same type, I would simply use:
UPDATE table_name
SET customerIds = customerId
As you have only one id per customer, you can simply update the first element of the array:
CREATE TABLE table_name (customerId BIGINT, customerIds BIGINT[]);
INSERT INTO table_name VALUES(1);
INSERT INTO table_name VALUES(2);
INSERT INTO table_name VALUES(3);
INSERT INTO table_name VALUES(4);
INSERT INTO table_name VALUES(5);
UPDATE table_name SET customerIds[1] = customerId ;
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
UPDATE 5
SELECT * FROM table_name
 customerid | customerids
------------+-------------
          1 | {1}
          2 | {2}
          3 | {3}
          4 | {4}
          5 | {5}
SELECT 5
Maybe this helps:
with cte as (select 4::BIGINT as num)
select array_agg(cte.num) from cte;
For your problem:
UPDATE table_name
SET customerIds = array_agg(customerId)
This answer is just an idea, not tested.
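A simpler variant of that idea, using PostgreSQL's standard ARRAY constructor instead of an aggregate (a sketch of my own, not from either answer, reusing the table and column names above):
-- sketch: wrap the scalar in a one-element array
UPDATE table_name
SET customerIds = ARRAY[customerId]
WHERE customerId IS NOT NULL;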

Hive : Cannot copy data from unpartitioned table to partitioned table

I have an unpartitioned table
create table tabUn
(
col1 string,
col2 int
)
Let's say it has some data. Next, I created a partitioned table:
CREATE EXTERNAL TABLE tabPart
(
col1 string,
col2 int
)
PARTITIONED BY (col_date string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/path/to/table';
Finally, I tried to copy the data over
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE tabPart PARTITION(data_date='2018-10-01')
SELECT
(
col1,
col2,
'2018-10-01' as col_date
) select * FROM tabUn;
but I get the below error
FAILED: NullPointerException null
What am I doing wrong?
Your select statement seems to be incorrect.
INSERT OVERWRITE TABLE tabPart PARTITION (col_date='2018-10-01')
SELECT col1, col2 FROM tabUn;
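Since the question already enables dynamic partitioning, an equivalent dynamic-partition form would be (my sketch, not part of the original answer; here the partition value comes from the last column of the SELECT):
-- dynamic-partition variant of the same insert
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE tabPart PARTITION (col_date)
SELECT col1, col2, '2018-10-01' AS col_date FROM tabUn;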

Loop through rows and add a number for a column for each of them automatically in SQL Server

I have a table with over 500 rows and a column called ID whose data type is INT. Currently the values are all NULL.
What I want to achieve is to populate the ID column with an incremental number for each row, say 1, 2, 3, 4, ..., 500 etc.
Please give me any idea of how to achieve this with a SQL script.
Using ROW_NUMBER in a CTE is one way (a sketch of that follows at the end of this answer), but here's an alternative: create a new id1 column as int identity(1,1), copy it over to id, then drop id1:
-- sample table
create table myTable(id int, value varchar(100));
-- populate 10 rows with just the value column
insert into myTable(value)
select top 10 'some data'
from sys.messages;
go
-- now populate id with sequential integers
alter table myTable add id1 int identity(1,1)
go
update myTable set id=id1;
go
alter table myTable drop column id1;
go
select * from myTable
Result:
id value
----------- -------------
1 some data
2 some data
3 some data
4 some data
5 some data
6 some data
7 some data
8 some data
9 some data
10 some data
While you could also drop and recreate ID as an identity, it would lose its ordinal position, hence the temporary id1 column.
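For reference, a minimal sketch of the ROW_NUMBER approach mentioned above, reusing the sample myTable (the ORDER BY expression is an assumption; replace it with whatever ordering you actually want):
-- sketch: assign sequential numbers via ROW_NUMBER(); (SELECT NULL) means arbitrary order
;WITH numbered AS (
    SELECT id, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM myTable
)
UPDATE numbered SET id = rn;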
-- create one temporary table
CREATE TABLE Tmp
(
    ID int IDENTITY(1, 1) NOT NULL,
    field(s) datatype NULL
)
-- suppose your old table name is tbl; now pull the data over
-- ID will be auto-incremented here
-- don't select ID here as it is NULL
INSERT INTO Tmp (field(s))
SELECT
    field(s)
FROM tbl
-- drop the current table
DROP TABLE tbl
-- rename the temp table to the current one
Exec sp_rename 'Tmp', 'tbl'
-- drop your temp table
-- write an alter command to set the identity to ID of the current table
good luck

oracle sql precision, scale, insert, calculate and drop

table = mytable
temp col = tempcol
col = mycol
mycol currently contains 5000 rows with various values from 99999.99999 to 0.00001.
I need to keep the data: create a script to add a temp column, round the values to (7,3), update mycol to a null value, modify my column from (10,5) to (7,3), return the data to mycol, and drop the temp column. Job done.
so far
SELECT mycol
INTO tempcol
FROM mytable
update mytable set mycol = null
alter table mytable modify mycol number (7,3)
SELECT tempcol
INTO mycol
FROM mytable
drop tempcol
Can you please fill in the missing gaps or direct me to a solution?
Well, first of all, a NUMBER(10,5) can store values from about -99999 to 99999, while the NUMBER(7,3) interval is only [-9999, 9999], so you will potentially encounter conversion errors. You probably want to change the column to NUMBER(8,3).
Now your plan seems sound: you cannot reduce the precision or the scale of a column while there is data in that column, so you will store the data in a temporary column. I would do it like this:
SQL> CREATE TABLE mytable (mycol NUMBER(10,5));
Table created
SQL> /* populate table */
2 INSERT INTO mytable
3 (SELECT dbms_random.value(0, 1e10)/1e5
4 FROM dual CONNECT BY LEVEL <= 1e3);
1000 rows inserted
SQL> /* new temp column */
2 ALTER TABLE mytable ADD (tempcol NUMBER(8,3));
Table altered
SQL> /* copy data to temp */
2 UPDATE mytable
3 SET tempcol = mycol,
4 mycol = NULL;
1000 rows updated
SQL> ALTER TABLE mytable MODIFY (mycol NUMBER(8,3));
Table altered
SQL> UPDATE mytable
2 SET mycol = tempcol;
1000 rows updated
SQL> /* cleaning */
2 ALTER TABLE mytable DROP COLUMN tempcol;
Table altered
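To confirm the column now has the new precision and scale, a quick data-dictionary check (not part of the original answer) would be:
-- not from the original answer: verify the new column definition
SELECT column_name, data_precision, data_scale
FROM user_tab_columns
WHERE table_name = 'MYTABLE' AND column_name = 'MYCOL';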