Hive - drop all partitions, keeping only the most recent 7 days' partitions - hive

I have a table with partitions like the one below:
TABLE logs PARTITION(year = 2019, month = 06, day = 18)
The partition columns 'year', 'month' and 'day' are strings.
I need to drop partitions while keeping the last seven days' partitions, and I need to run the job every week so that the logs table holds 7 days of logs at the start of every week.

You can use the <= operator (and other comparison operators) in the partition specification.
Demo:
use mydb;
drop table if exists test_partition_drop;
CREATE TABLE test_partition_drop
(col1 STRING)
PARTITIONED BY (part_year string, part_month string, part_day string);
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2019', part_month='06', part_day='09') VALUES ('01');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2019', part_month='06', part_day='10') VALUES ('01');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2019', part_month='06', part_day='11') VALUES ('02');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2019', part_month='06', part_day='12') VALUES ('03');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2019', part_month='06', part_day='13') VALUES ('05');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2019', part_month='06', part_day='14') VALUES ('06');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2019', part_month='06', part_day='15') VALUES ('06');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2018', part_month='06', part_day='14') VALUES ('01');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2018', part_month='06', part_day='15') VALUES ('02');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2019', part_month='05', part_day='14') VALUES ('03');
INSERT INTO TABLE test_partition_drop PARTITION (part_year='2019', part_month='04', part_day='15') VALUES ('04');
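To verify what the demo created (and, later, what is left after the drop), you can list the table's partitions:
show partitions test_partition_drop;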
Calculate the boundary partition keys (everything up to and including this date gets dropped) and pass them to your DROP PARTITION script; the string comparisons work here because the partition values are zero-padded:
var_year="$(date -d "7 days ago" +"%Y")"
var_month="$(date -d "7 days ago" +"%m")"
var_day="$(date -d "7 days ago" +"%d")"
hive -e "
use mydb;
ALTER TABLE test_partition_drop DROP IF EXISTS PARTITION (part_year<'${var_year}');
ALTER TABLE test_partition_drop DROP IF EXISTS PARTITION (part_year='${var_year}', part_month<'${var_month}');
ALTER TABLE test_partition_drop DROP IF EXISTS PARTITION (part_year='${var_year}', part_month<='${var_month}', part_day<='${var_day}');
"
Result:
OK
Time taken: 0.762 seconds
Dropped the partition part_year=2018/part_month=06/part_day=14
Dropped the partition part_year=2018/part_month=06/part_day=15
OK
Time taken: 1.643 seconds
Dropped the partition part_year=2019/part_month=04/part_day=15
Dropped the partition part_year=2019/part_month=05/part_day=14
OK
Time taken: 1.0 seconds
Dropped the partition part_year=2019/part_month=06/part_day=09
Dropped the partition part_year=2019/part_month=06/part_day=10
Dropped the partition part_year=2019/part_month=06/part_day=11
Dropped the partition part_year=2019/part_month=06/part_day=12
Dropped the partition part_year=2019/part_month=06/part_day=13
Dropped the partition part_year=2019/part_month=06/part_day=14
OK
Time taken: 2.097 seconds
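To run this every week, as the question asks, one option is to schedule the script with cron; this is only a sketch, where the script path and the schedule are assumptions rather than part of the original answer:
# run the partition cleanup every Monday at 02:00
0 2 * * 1 /path/to/drop_old_log_partitions.sh >> /var/log/drop_old_log_partitions.log 2>&1
Here drop_old_log_partitions.sh would contain the date calculations and the hive -e call shown above.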

Related

Postgres partition by range with an "x and above" condition

I am wondering how to partition a table like this:
from 0 to 100
from 100 to 200
200 and above
CREATE TABLE grade_main
(
id serial not null,
g int not null
) partition by range (g);
CREATE TABLE grade_00_100 PARTITION OF grade_main FOR VALUES FROM (0) TO (100);
CREATE TABLE grade_100_200 PARTITION OF grade_main FOR VALUES FROM (100) TO (200);
-- The following returns syntax error
CREATE TABLE grade_150_all PARTITION OF grade_main FOR VALUES FROM (150);
To specify a range that has no upper limit, use
FOR VALUES FROM (150) TO (MAXVALUE)
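Applied to the example above, and using 200 as the lower bound so the new partition does not overlap the existing grade_100_200 range (PostgreSQL rejects overlapping partitions), a sketch with an illustrative partition name would be:
CREATE TABLE grade_200_plus PARTITION OF grade_main FOR VALUES FROM (200) TO (MAXVALUE);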

Reset number order in DB2 SQL

How do I reset the order numbers of the existing data? I tried alter table tb_shk_user alter column autonum restart with 1, but it only works for newly inserted data: the numbering starts from 1 again and duplicates the existing 1.
The autonum column is defined as:
AUTONUM INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1) primary key
Or this:
--Save the data of your table into a temporary table
create table tmptable as
(
select * from tb_shk_user
)
with data;
--Remove all rows from your table
delete from tb_shk_user ;
--Insert all rows back into your table with autonum renumbered; be careful: every column must be listed in the select, with the new number in place of autonum
insert into tb_shk_user overriding system value
select rownumber() over(order by autonum) as newautonum, username, mobilno
from tmptable;
-- Reposition the autoincrement at max + 1
-- For example, if 47896 is max + 1 of autonum after the preceding insert
alter table tb_shk_user alter column autonum restart with 47896;
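To find the value to plug into RESTART WITH (47896 in the example comments), you can query the new maximum first:
select max(autonum) + 1 from tb_shk_user;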
I suppose you want to restart it in order to get a clean series of ids; you can do it like this:
--Temporary table with the autonum key and the new ordered number
create table tmptable as (
select autonum, rownumber() over(order by autonum) as newautonum
from tb_shk_user
) with data;
--Update your table with new num
update tb_shk_user f1 overriding system value
set f1.autonum=
(
select f2.newautonum from tmptable f2
where f1.autonum=f2.autonum
)
where exists
(
select * from tmptable f2
where f1.autonum=f2.autonum
);
-- Reposition the autoincrement at max + 1
-- For example, if 47896 is max + 1 of autonum after the preceding update
alter table tb_shk_user alter column autonum restart with 47896;

How to insert a table1 field into another table2 with the same table2 field

MS SQL 2014
I have a table temp (time, name) and a table Name (name).
The Name table contains the data (Ram, Shyam).
I want to insert data like this: Insert Into temp (time, name) Values ('10:30', 'Ram'), ('10:30', 'Shyam');
But I want to automate it: the insert should take the current time and insert all the rows from the Name table.
Is this possible using a SQL query?
You can insert the values from the Name table as below:
INSERT INTO temp (time, name) SELECT GETDATE(), Name from Name
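If the time column is really of type time (an assumption based on the 10:30 values in the question), you may want to cast explicitly so only the time part is stored:
INSERT INTO temp (time, name) SELECT CAST(GETDATE() AS time), Name FROM Name;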

How to copy data of datetime column to another column in SQL

How can I copy the data from a datetime column to another column, but without the hours, minutes and seconds?
Table A
ID Time Time2
1 2012-08-08 00:38:59.783 NULL
After copy
ID Time Time2
1 2012-08-08 00:38:59.783 2012-08-08
update TableA
set Time2 = cast(Time as date)
DEMO
create table #temp (Time datetime, Time2 date)
insert into #temp values ('2012-08-08 00:38:59.783', null)
insert into #temp values ('2012-08-05 02:30:34.123', null)
update #temp set Time2 = Time
select * from #temp
drop table #temp
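With the demo data above, the final select should show Time2 populated with just the date part, along these lines:
Time                     Time2
2012-08-08 00:38:59.783  2012-08-08
2012-08-05 02:30:34.123  2012-08-05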

Oracle - updating a sorted table

I found an old table without a primary key, and in order to add one, I have to add a new column and fill it with sequence values. I have another column which contains the time when the record was created, so I want to insert the sequence values into the table sorted by that time column.
I'm not sure how to do it. I tried using PL/SQL - I created a cursor for a query that returns the rows with an ORDER BY, and then updated each record the cursor returned, but it didn't work.
Is there a smart working way to do this?
Thanks in advance.
Another option is just to use a correlated subquery, with the wrinkle of a nested subquery to generate the row number. Setting up some sample data:
create table t42 (datefield date);
insert into t42 (datefield) values (sysdate - 7);
insert into t42 (datefield) values (sysdate + 6);
insert into t42 (datefield) values (sysdate - 5);
insert into t42 (datefield) values (sysdate + 4);
insert into t42 (datefield) values (sysdate - 3);
insert into t42 (datefield) values (sysdate + 2);
select * from t42;
DATEFIELD
---------
12-JUL-12
25-JUL-12
14-JUL-12
23-JUL-12
16-JUL-12
21-JUL-12
Then adding and populating the new column:
alter table t42 add (id number);
update t42 t1 set t1.id = (
select rn from (
select rowid, row_number() over (order by datefield) as rn
from t42
) t2
where t2.rowid = t1.rowid
);
select * from t42 order by id;
DATEFIELD ID
--------- ----------
12-JUL-12 1
14-JUL-12 2
16-JUL-12 3
21-JUL-12 4
23-JUL-12 5
25-JUL-12 6
Since this is a synthetic key, making it match the order of another column seems a bit pointless, but I guess it doesn't do any harm.
To complete the task:
alter table t42 modify id not null;
alter table t42 add constraint t42_pk primary key (id);
First of all, create the new field and allow null values.
Then update the field from another table or query. The best approach is to use a MERGE statement.
Here is a sample from the documentation:
MERGE INTO bonuses D
USING (SELECT employee_id, salary, department_id FROM employees
WHERE department_id = 80) S
ON (D.employee_id = S.employee_id)
WHEN MATCHED THEN UPDATE SET D.bonus = D.bonus + S.salary*.01
DELETE WHERE (S.salary > 8000)
WHEN NOT MATCHED THEN INSERT (D.employee_id, D.bonus)
VALUES (S.employee_id, S.salary*.01)
WHERE (S.salary <= 8000);
Finally, set this new field as NOT NULL and promote it to primary key.
Here are sample statements:
ALTER TABLE customer MODIFY (your_new_field varchar2(100) not null);
ALTER TABLE customer ADD CONSTRAINT customer_pk PRIMARY KEY (your_new_field);
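For this particular task there is no separate source table, so the USING clause can simply be a query over the table itself; here is a sketch in the spirit of the t42 example from the earlier answer (table, column and alias names are illustrative, not from the question):
MERGE INTO t42 d
USING (
  SELECT rowid AS rid, row_number() OVER (ORDER BY datefield) AS rn
  FROM t42
) s
ON (d.rowid = s.rid)
WHEN MATCHED THEN UPDATE SET d.id = s.rn;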
One simple way is to create a new table, with the new column and all the other columns:
create table newt (
newtID int primary key not null,
. . .
)
Then insert all the old data into it:
insert into newt
select row_number() over (order by <CreatedAt>), t.*
from t
(You can list all the columns explicitly instead of using "*"; naming the columns is the better practice. Using * is shorter here, and I don't know the column names.)
If you alter the existing table to add the column, the new column will appear at the end. I find that quite awkward for a primary key. If you do that, though, you can update it as:
with t as (select row_number() over (order by <CreatedAt>) as seqnum, t.*
from t
)
update t
set newtID = seqnum
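Note that Oracle itself does not accept an UPDATE against a common table expression like the one above; in Oracle you would fall back to the correlated-subquery UPDATE (or MERGE) shown earlier, roughly like this (a sketch, assuming the new column is newtID and the creation-time column is CreatedAt):
update t t1
set t1.newtID = (
  select rn from (
    select rowid, row_number() over (order by CreatedAt) as rn
    from t
  ) t2
  where t2.rowid = t1.rowid
);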