Want to create a new table from the findings of the original table - SQL

I am quite new to PostgreSQL. I have a Postgres table like this:
createdat | pageTitle | sessionId | text | device | deviceserial
Wed Sep 02 2020 08:55:10 GMT+0000 | submit option | null | launchComponent | Android | 636363636890
Wed Sep 02 2020 09:05:11 | quick check | 88958d89c65f4fcea56e148a5a2838cfhdhdhd | hi | Android | 6625839827
Wed Sep 02 2020 08:59:10 GMT+0000 | submit option | null | launchComponent | Android | 636363636890
Wed Sep 02 2020 09:07:11 | quick check | 88958d89c65f4fcea56e148a5a2838cfhdhdhd | hi | Android | 6625839827
Wed Sep 02 2020 09:01:10 GMT+0000 | submit option | null | launchComponent | Android | 636363636890
Wed Sep 02 2020 09:09:11 | quick check | 88958d89c65f4fcea56e148a5a2838cfhdhdhd | hi | Android | 6625839827
Wed Sep 02 2020 09:03:10 GMT+0000 | submit option | null | launchComponent | Android | 636363636890
Wed Sep 02 2020 09:09:11 | quick check | 88958d89c65f4fcea56e148a5a2838cfhdhdhd | hi | Android | 6625839828
Wed Sep 02 2020 09:03:10 GMT+0000 | submit option | null | launchComponent | Android | 636363636891
Wed Sep 02 2020 09:13:11 | quick check | 88958d89c65f4fcea56e148a5a2838cfhdhdhd | hi | Android | 6625839828
Wed Sep 02 2020 09:06:10 GMT+0000 | submit option | null | launchComponent | Android | 636363636891
I grouped this table by deviceserial with this command:
SELECT
    deviceserial,
    DATE_PART('minute', max(createdat)::timestamp - min(createdat)::timestamp) AS time_difference
FROM devices
GROUP BY deviceserial;
Then the result is below. Now I want to create a new table named "device_usage" with the columns deviceserial, device, and usage from the values below, plus another column for an id. After that I want to copy the "device_usage" table to another database. How can I do that? I could not find a good solution.
deviceserial | device | usage
636363636891 | Android | 3
636363636890 | Android | 8
6625839827 | Android | 29
6625839828 | Android | 4

1. Connect to both databases.
2. Run a CREATE TABLE statement with the appropriate column definitions on the second database. The id column is created as
   id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY
3. Run your query on the first database and fetch the results row by row.
4. For each row, run an appropriate INSERT statement on the second database. That statement doesn't insert into id, so the column is filled with the autogenerated value.
These steps are for a solution with a program; a sketch of what that could look like follows below.
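A minimal sketch of those steps in Python with psycopg2 (the connection strings are hypothetical; the device column in the GROUP BY and the integer type for usage are assumptions based on the sample output, not something the answer prescribes):

import psycopg2

src = psycopg2.connect("dbname=db1")  # first database, holds "devices"
dst = psycopg2.connect("dbname=db2")  # second database, gets "device_usage"

# Step 2: create the target table; id is filled with generated values.
with dst, dst.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS device_usage (
            id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
            deviceserial text,
            device text,
            usage integer
        )""")

# Steps 3 and 4: fetch the results row by row and insert them,
# leaving id out of the INSERT so it is autogenerated.
with src, src.cursor() as sel, dst, dst.cursor() as ins:
    sel.execute("""
        SELECT deviceserial, device,
               DATE_PART('minute', max(createdat)::timestamp
                                 - min(createdat)::timestamp) AS usage
        FROM devices
        GROUP BY deviceserial, device""")
    for row in sel:
        ins.execute(
            "INSERT INTO device_usage (deviceserial, device, usage) "
            "VALUES (%s, %s, %s)", row)

src.close()
dst.close()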
If you want to do it by hand (i.e., as a one-time task), move the data with COPY. Using psql, moving the data could look like this:
\connect db1
\copy (SELECT /* your query */) TO 'datafile'
\connect db2
\copy newtable (deviceserial, device, usage) FROM 'datafile'
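For this particular table, the two ends of that pipeline could look like the following (a sketch: the column types, and adding device to the GROUP BY so it can be selected, are assumptions based on the sample output):

-- on db2: the target table; id fills itself
CREATE TABLE device_usage (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    deviceserial text,
    device text,
    usage integer
);

-- on db1: the query to export with \copy (...) TO 'datafile'
SELECT deviceserial,
       device,
       DATE_PART('minute', max(createdat)::timestamp - min(createdat)::timestamp) AS usage
FROM devices
GROUP BY deviceserial, device;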

Related

Sort String Value for Date in SQL

I have an issue: I am trying to select date values stored in SQL Server as string values with the format "Thu, 08 Jul 2021 06:08:20 -0700", and I need to select the whole table with the newest date first, but I do not know how to convert this string into a date and sort by it. Thanks in advance.
Table
|Thu, 08 Jul 2021 06:08:20 -0700|
|Fri, 09 Jul 2021 01:08:20 -0700|
|Sun, 11 Jul 2021 07:08:20 -0700|
output (Newest Date first)
|Sun, 11 Jul 2021 07:08:20 -0700|
|Fri, 09 Jul 2021 01:08:20 -0700|
|Thu, 08 Jul 2021 06:08:20 -0700|
Your date is just missing a valid time zone offset value, so it needs a ":" inserted to make it -07:00; you can do this with Stuff, and use Substring to ignore the irrelevant day name. You don't state a specific database platform; for SQL Server you can then cast to a datetimeoffset, and other databases have similar but slightly varied syntax. This assumes the strings are all formatted consistently, of course.
declare @d varchar(31)='Thu, 08 Jul 2021 06:08:20 -0700'
select Cast(Stuff(Substring(@d,6,26),25,0,':') as datetimeoffset(0))
Result
2021-07-08 06:08:20 -07:00
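To sort the whole table newest-first, the same expression can go straight into ORDER BY (a sketch; YourTable and DateCol are hypothetical names, since the question doesn't give any):

select DateCol
from YourTable
order by Cast(Stuff(Substring(DateCol,6,26),25,0,':') as datetimeoffset(0)) desc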

What does the "+"-sign in the duration column of the last command in OS X terminal mean?

I am running last to check my kids' computer usage, and I am getting output like this:
Ruben console Wed Mar 22 09:17 - crash (02:17)
Ruben console Tue Mar 14 15:56 - crash (6+06:52)
Ruben console Sat Mar 4 12:08 - crash (1+04:48)
Ruben console Tue Feb 28 16:24 - crash (3+01:48)
Ruben console Mon Feb 13 06:47 - crash (10:39)
Ruben console Tue Jan 31 16:27 - crash (01:16)
If I understand things correctly, the first line describes a 2-minute, 17-second session. How do I read the following ones, though? What does (6+06:52) mean?

Duplicity incremental backup taking too long

I have duplicity running an incremental daily backup to S3. About 37 GiB.
For the first month or so, it went OK; it used to finish in about 1 hour. Then it started taking too long to complete the task. Right now, as I type, it is still running the daily backup that started 7 hours ago.
I'm running two commands, first to back up and then to clean up:
duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST
The logs
Temp has 54774476800 available, backup will use approx 907857100.
So the temp has enough space, good. Then it starts with this...
Copying duplicity-full-signatures.20161107T090303Z.sigtar.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-13tylb-2
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-NanCxQ-3
[...]
Copying duplicity-inc.20161110T095624Z.to.20161111T103456Z.manifest.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-VQU2zx-30
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-4Idklo-31
[...]
This continues for each day up to today, taking several minutes per file. Then it continues with this...
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Mon Nov 7 09:03:03 2016)
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Wed Nov 9 18:09:07 2016)
Added incremental Backupset (start_time: Thu Nov 10 09:56:24 2016 / end_time: Fri Nov 11 10:34:56 2016)
After a long time...
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /home/user/.cache/duplicity/700b5f90ee4a620e649334f96747bd08
Found 6 secondary backup chains.
Secondary chain 1 of 6:
-------------------------
Chain start time: Mon Nov 7 09:03:03 2016
Chain end time: Mon Nov 7 09:03:03 2016
Number of contained backup sets: 1
Total number of contained volumes: 2
Type of backup set: Time: Num volumes:
Full Mon Nov 7 09:03:03 2016 2
-------------------------
Secondary chain 2 of 6:
-------------------------
Chain start time: Wed Nov 9 18:09:07 2016
Chain end time: Wed Nov 9 18:09:07 2016
Number of contained backup sets: 1
Total number of contained volumes: 11
Type of backup set: Time: Num volumes:
Full Wed Nov 9 18:09:07 2016 11
-------------------------
Secondary chain 3 of 6:
-------------------------
Chain start time: Thu Nov 10 09:56:24 2016
Chain end time: Sat Dec 10 09:44:31 2016
Number of contained backup sets: 31
Total number of contained volumes: 41
Type of backup set: Time: Num volumes:
Full Thu Nov 10 09:56:24 2016 11
Incremental Fri Nov 11 10:34:56 2016 1
Incremental Sat Nov 12 09:59:47 2016 1
Incremental Sun Nov 13 09:57:15 2016 1
Incremental Mon Nov 14 09:48:31 2016 1
[...]
After listing all chains:
Also found 0 backup sets not part of any chain, and 1 incomplete backup set.
These may be deleted by running duplicity with the "cleanup" command.
This was only the backup part. It takes hours doing all of this, yet only 10 minutes to upload 37 GiB to S3.
ElapsedTime 639.59 (10 minutes 39.59 seconds)
SourceFiles 288
SourceFileSize 40370795351 (37.6 GB)
Then comes the cleanup, which gives me this:
Cleaning up
Local and Remote metadata are synchronized, no sync needed.
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
There are backup set(s) at time(s):
Tue Jan 10 09:58:05 2017
Wed Jan 11 09:54:03 2017
Thu Jan 12 09:56:42 2017
Fri Jan 13 10:05:05 2017
Sat Jan 14 10:24:54 2017
Sun Jan 15 09:49:31 2017
Mon Jan 16 09:39:41 2017
Tue Jan 17 09:59:05 2017
Wed Jan 18 09:59:56 2017
Thu Jan 19 10:01:51 2017
Fri Jan 20 09:35:30 2017
Sat Jan 21 09:53:26 2017
Sun Jan 22 09:48:57 2017
Mon Jan 23 09:38:45 2017
Tue Jan 24 09:54:29 2017
Which can't be deleted because newer sets depend on them.
Found old backup chains at the following times:
Mon Nov 7 09:03:03 2016
Wed Nov 9 18:09:07 2016
Sat Dec 10 09:44:31 2016
Mon Jan 9 10:04:51 2017
Rerun command with --force option to actually delete.
I found the problem. Because of an issue, I followed this answer, and added this code to my script:
rm -rf ~/.cache/deja-dup/*
rm -rf ~/.cache/duplicity/*
This was supposed to be a one-time thing because of a random bug duplicity had, but the answer didn't mention that. So every day the script was removing the cache just after syncing it, and the next day it had to download the whole thing again.
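The fix, then, is simply to drop those two rm lines, so the daily script reduces to the two duplicity commands from the question (sketched below, with --force added to the cleanup, which duplicity's own output asks for before it actually deletes anything):

duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST --force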

ZAP Attack proxy History Request ID is not consecutive

I've used ZAP to intercept traffic.
It works nicely, and I have a history of my REQUEST-RESPONSE pairs like this:
ID Req. TimeStamp Method etc ..
...
1955 Tue Apr 05 15:42:47 CEST 2016 GET https://...
1971 Tue Apr 05 15:42:49 CEST 2016 GET https://...
1984 Tue Apr 05 15:43:30 CEST 2016 GET https://...
1998 Tue Apr 05 15:43:31 CEST 2016 GET https://...
...
How come the IDs are not consecutive?
We have a FAQ for that :) https://github.com/zaproxy/zaproxy/wiki/FAQhistoryIdsMissing
Simon (ZAP Project Lead)

Extract Date from varchar field in SQL Server 2008

I have a table which has only 1 large column named Details; each record looks similar to this:
Record#1: ...ID: <klsdhf89435> Date: 1 Jun 2011 12:28:14 From: "Yahoo"...
Record#2: ...Subject: test Date: Fri, 24 May 2010 13:18:43 -0500 ID: <7532432423>...
Record#3: ...ID: <234321fd> Date: 14 Jun 2010 12:28:14 From: "Gmail"...
Record#4: ...ID: <12434> Date: 1 Jun 2011 12:28:14 From: "Yahoo"...
I would like to extract the Date only. So, for those 4 records, I would like to extract:
1 Jun 2011 12:28:14
Fri, 24 May 2010 13:18:43 -0500
14 Jun 2010 12:28:14
1 Jun 2011 12:28:14
Please note that the double space before "From" or before "ID" is a newline character, which is Char(10) in SQL Server.
Thanks in advance
Since this is SQL Server, use CHARINDEX and SUBSTRING, and take the Char(10) you mention as the terminator (the ' From:' marker is missing from your second record, so it can't serve as the delimiter):
SELECT SUBSTRING(Details,
                 CHARINDEX('Date: ', Details) + 6,
                 CHARINDEX(CHAR(10), Details + CHAR(10), CHARINDEX('Date: ', Details) + 6) - CHARINDEX('Date: ', Details) - 6)
FROM TABLENAME
Consider parsing the date into a new date column when the row is inserted.
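If you follow that suggestion, a one-time backfill could look like this (a sketch: ExtractedDate is a hypothetical column name, kept as varchar because the sample dates come in mixed formats):

ALTER TABLE TABLENAME ADD ExtractedDate varchar(40);

UPDATE TABLENAME
SET ExtractedDate = SUBSTRING(Details,
        CHARINDEX('Date: ', Details) + 6,
        CHARINDEX(CHAR(10), Details + CHAR(10), CHARINDEX('Date: ', Details) + 6) - CHARINDEX('Date: ', Details) - 6);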