I've used ZAP to intercept traffic.
It works nicely, and I have a history of my request-response pairs like this:
ID    Req. Timestamp                  Method  etc.
...
1955  Tue Apr 05 15:42:47 CEST 2016   GET     https://...
1971  Tue Apr 05 15:42:49 CEST 2016   GET     https://...
1984  Tue Apr 05 15:43:30 CEST 2016   GET     https://...
1998  Tue Apr 05 15:43:31 CEST 2016   GET     https://...
...
How come the IDs are not consecutive?
We have a FAQ for that :) https://github.com/zaproxy/zaproxy/wiki/FAQhistoryIdsMissing
Simon (ZAP Project Lead)
I couldn't explain the problem briefly, so I'll illustrate it this way. Let's say I have a table similar to the one below.
How do I get, per student, the total number of days in October on which that student has at least 1 book checked out?
Please note that a single student can check out more than 1 book at a time, which causes the overlapping dates.
|Student|Book|Date_Borrowed|Date_Returned|
|-------|----|-------------|-------------|
|David|A Thousand Splendid Suns|01 Oct 2021|05 Oct 2021|
|David|Jane Eyre|09 Oct 2021|13 Oct 2021|
|David|Please Look After Mom|21 Oct 2021|29 Oct 2021|
|Fiona|Sense and Sensibility|05 Oct 2021|14 Oct 2021|
|Fiona|The Girl Who Saved the King of Sweden|05 Oct 2021|14 Oct 2021|
|Fiona|A Fort of Nine Towers|02 Oct 2021|17 Oct 2021|
|Fiona|One Hundred Years of Solitude|20 Oct 2021|30 Oct 2021|
|Fiona|The Unbearable Lightness of Being|20 Oct 2021|30 Oct 2021|
|Greg|Fahrenheit 451|06 Oct 2021|11 Oct 2021|
|Greg|One Hundred Years of Solitude|10 Oct 2021|17 Oct 2021|
|Greg|Please Look After Mom|15 Oct 2021|21 Oct 2021|
|Greg|4 3 2 1|20 Oct 2021|27 Oct 2021|
|Greg|The Girl Who Saved the King of Sweden|27 Oct 2021|03 Nov 2021|
|Marcus|Fahrenheit 451|01 Oct 2021|04 Oct 2021|
|Marcus|Nectar in a Sieve|15 Oct 2021|15 Oct 2021|
|Marcus|Please Look After Mom|30 Oct 2021|31 Oct 2021|
|Priya|Like Water for Chocolate|02 Oct 2021|21 Oct 2021|
|Priya|Fahrenheit 451|21 Oct 2021|22 Oct 2021|
|Sasha|Baudolino|03 Oct 2021|29 Oct 2021|
|Sasha|A Thousand Splendid Suns|07 Oct 2021|16 Oct 2021|
|Sasha|A Fort of Nine Towers|26 Oct 2021|01 Nov 2021|
Thanks in advance!
Using the DATA step, you can expand each borrow/return range into one row per date (a long format). From there, you can use SQL to do a simple count by student after removing overlapping dates.
data foo;
    set have;
    /* output one row per student per day the book is out */
    do date = date_borrowed to date_returned;
        output;
    end;
    keep student date;
    format date date9.;
run;
This gets us a long table of all the dates with at least one book checked out for each student.
student date
David 01OCT2021
David 02OCT2021
David 03OCT2021
David 04OCT2021
David 05OCT2021
David 09OCT2021
...
Now we need to remove the overlapping dates.
/* keep one row per student per date */
proc sort data=foo nodupkey;
    by student date;
run;
From here, we can do a simple SQL count per student.
proc sql noprint;
    create table want as
    select student
         , intnx('month', date, 0, 'B') as month format=monyy7.
         , count(*) as days_checked_out
    from foo
    where calculated month = '01OCT2021'd
    group by student, calculated month
    ;
quit;
Output:
student month days_checked_out
David OCT2021 19
Fiona OCT2021 27
Greg OCT2021 26
Marcus OCT2021 7
Priya OCT2021 21
Sasha OCT2021 29
An easy way is to make a temporary array with one variable for each day in the time period you want to count. Then just use a DO loop to set the variables representing those days to 1. When you have reached the last record for a student, take the sum to find the number of days covered.
First let's convert your posted table into a dataset.
data have;
    infile cards dsd dlm='|' truncover;
    input Student :$20. Book :$100. (Date_Borrowed Date_Returned) (:date.);
    format Date_Borrowed Date_Returned date11.;
cards;
David|A Thousand Splendid Suns|01 Oct 2021|05 Oct 2021
David|Jane Eyre|09 Oct 2021|13 Oct 2021
David|Please Look After Mom|21 Oct 2021|29 Oct 2021
Fiona|Sense and Sensibility|05 Oct 2021|14 Oct 2021
Fiona|The Girl Who Saved the King of Sweden|05 Oct 2021|14 Oct 2021
Fiona|A Fort of Nine Towers|02 Oct 2021|17 Oct 2021
Fiona|One Hundred Years of Solitude|20 Oct 2021|30 Oct 2021
Fiona|The Unbearable Lightness of Being|20 Oct 2021|30 Oct 2021
Greg|Fahrenheit 451|06 Oct 2021|11 Oct 2021
Greg|One Hundred Years of Solitude|10 Oct 2021|17 Oct 2021
Greg|Please Look After Mom|15 Oct 2021|21 Oct 2021
Greg|4 3 2 1|20 Oct 2021|27 Oct 2021
Greg|The Girl Who Saved the King of Sweden|27 Oct 2021|03 Nov 2021
Marcus|Fahrenheit 451|01 Oct 2021|04 Oct 2021
Marcus|Nectar in a Sieve|15 Oct 2021|15 Oct 2021
Marcus|Please Look After Mom|30 Oct 2021|31 Oct 2021
Priya|Like Water for Chocolate|02 Oct 2021|21 Oct 2021
Priya|Fahrenheit 451|21 Oct 2021|22 Oct 2021
Sasha|Baudolino|03 Oct 2021|29 Oct 2021
Sasha|A Thousand Splendid Suns|07 Oct 2021|16 Oct 2021
Sasha|A Fort of Nine Towers|26 Oct 2021|01 Nov 2021
;
Now we can use BY-group processing in a data step to aggregate per student. We can set the lower and upper index of the array to the values SAS uses to represent those days. Temporary arrays are automatically retained across observations; we just need to clear the array out when we start a new student.
The SAS compiler does not expect to see a date literal as an array index boundary, so we use %SYSEVALF() to convert the date literal to the integer it represents.
data want;
    set have;
    by student;
    array october [%sysevalf('01oct2021'd):%sysevalf('31oct2021'd)] _temporary_;
    /* start each student with an empty array */
    if first.student then call missing(of october[*]);
    /* flag every October day covered by this loan */
    do date=max(date_borrowed,'01oct2021'd) to min(date_returned,'31oct2021'd);
        october[date]=1;
    end;
    /* output only once per student, at the last record */
    if last.student;
    days = sum(0, of october[*]);
    keep student days;
run;
Results:
Obs Student days
1 David 19
2 Fiona 27
3 Greg 26
4 Marcus 7
5 Priya 21
6 Sasha 29
You could also modify it slightly to count not only the number of "covered" (or unique) days but also the total number of "book" days.
data want;
    set have;
    by student;
    array october [%sysevalf('01oct2021'd):%sysevalf('31oct2021'd)] _temporary_;
    if first.student then call missing(of october[*]);
    /* count how many books are out on each October day */
    do date=max(date_borrowed,'01oct2021'd) to min(date_returned,'31oct2021'd);
        october[date]=sum(october[date],1);
    end;
    if last.student;
    unique_days = n(of october[*]);
    book_days   = sum(0, of october[*]);
    keep student unique_days book_days;
run;
Results:
Obs  Student  unique_days  book_days
1    David    19           19
2    Fiona    27           58
3    Greg     26           34
4    Marcus   7            7
5    Priya    21           22
6    Sasha    29           43
I want to get the latest value of each SIZE_TYPE per day, ordered by TIMESTAMP. So only one value of each SIZE_TYPE should be present for a given day, and that should be the latest value for that day.
How do I get the desired output? I'm using PostgreSQL here.
Input
|TIMESTAMP |SIZE_TYPE|SIZE|
|----------------------------------------|---------|----|
|1595833641356 [Mon Jul 27 2020 07:07:21]|0 |541 |
|1595833641356 [Mon Jul 27 2020 07:07:21]|1 |743 |
|1595833641356 [Mon Jul 27 2020 07:07:21]|2 |912 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|1 |714 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|2 |987 |
|1595963241356 [Tue Jul 28 2020 19:07:21]|0 |498 |
|1595920041356 [Tue Jul 28 2020 07:07:21]|2 |974 |
|1595920041356 [Tue Jul 28 2020 07:07:21]|0 |512 |
*Note: the TIMESTAMP values are in UNIX time (milliseconds). I have given the date-time string for reference.*
Output
|TIMESTAMP |SIZE_TYPE|SIZE|
|----------------------------------------|---------|----|
|1595833641356 [Mon Jul 27 2020 07:07:21]|0 |541 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|1 |714 |
|1595876841356 [Mon Jul 27 2020 19:07:21]|2 |987 |
|1595920041356 [Tue Jul 28 2020 07:07:21]|2 |974 |
|1595963241356 [Tue Jul 28 2020 19:07:21]|0 |498 |
Explanation
For July 27, the latest values for
0: 541 (no other entries for the day)
1: 714
2: 987
For July 28, the latest values for
0: 498
1: nothing (ignore)
2: 974 (no other entries for the day)
You can use distinct on:
select distinct on (floor(timestamp / (24 * 60 * 60 * 1000)), size_type) t.*
from input t
order by floor(timestamp / (24 * 60 * 60 * 1000)), size_type,
         timestamp desc;
The arithmetic just extracts the day number from the millisecond timestamp.
Here is a db<>fiddle.
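If you prefer working with real dates, here is a sketch of an equivalent query that turns the millisecond epoch into a UTC calendar day via to_timestamp() instead of integer division (assuming, as above, that UTC day boundaries are what you want):
select distinct on ((to_timestamp(timestamp / 1000) at time zone 'UTC')::date, size_type) t.*
from input t
order by (to_timestamp(timestamp / 1000) at time zone 'UTC')::date, size_type,
         timestamp desc;
Both forms bucket the rows by the same UTC day, so they should return the same result for this data.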
I have an issue: I am trying to select date values that are stored in SQL Server as strings in this format, "Thu, 08 Jul 2021 06:08:20 -0700", and I need to select the whole table with the newest date first, but I do not know how to convert this string into a date and sort by it. Thanks in advance.
Table
|Thu, 08 Jul 2021 06:08:20 -0700|
|Fri, 09 Jul 2021 01:08:20 -0700|
|Sun, 11 Jul 2021 07:08:20 -0700|
output (Newest Date first)
|Sun, 11 Jul 2021 07:08:20 -0700|
|Fri, 09 Jul 2021 01:08:20 -0700|
|Thu, 08 Jul 2021 06:08:20 -0700|
Your date string is just missing a valid time zone offset, so it needs a ":" inserted to make it -07:00; you can do this with Stuff(), and use Substring() to skip the irrelevant day name. You don't state a specific database platform; for SQL Server you can then cast to a datetimeoffset, and other databases have similar but slightly varied syntax. This assumes the strings are all formatted consistently, of course.
declare @d varchar(30) = 'Thu, 08 Jul 2021 06:08:20 -0700';
select Cast(Stuff(Substring(@d, 6, 26), 25, 0, ':') as datetimeoffset(0));
Result
2021-07-08 06:08:20 -07:00
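Applied to a whole table, the same expression can drive the sort. A minimal sketch, assuming a hypothetical table dbo.MyTable with the string column DateText:
select *
from dbo.MyTable   -- hypothetical table name
order by Cast(Stuff(Substring(DateText, 6, 26), 25, 0, ':') as datetimeoffset(0)) desc;   -- newest first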
I am quite new to PostgreSQL. I have a Postgres table like this:
|createdat|pageTitle|sessionId|text|device|deviceserial|
|---------|---------|---------|----|------|------------|
|Wed Sep 02 2020 08:55:10 GMT+0000|submit option|null|launchComponent|Android|636363636890|
|Wed Sep 02 2020 09:05:11|quick check|88958d89c65f4fcea56e148a5a2838cfhdhdhd|hi|Android|6625839827|
|Wed Sep 02 2020 08:59:10 GMT+0000|submit option|null|launchComponent|Android|636363636890|
|Wed Sep 02 2020 09:07:11|quick check|88958d89c65f4fcea56e148a5a2838cfhdhdhd|hi|Android|6625839827|
|Wed Sep 02 2020 09:01:10 GMT+0000|submit option|null|launchComponent|Android|636363636890|
|Wed Sep 02 2020 09:09:11|quick check|88958d89c65f4fcea56e148a5a2838cfhdhdhd|hi|Android|6625839827|
|Wed Sep 02 2020 09:03:10 GMT+0000|submit option|null|launchComponent|Android|636363636890|
|Wed Sep 02 2020 09:09:11|quick check|88958d89c65f4fcea56e148a5a2838cfhdhdhd|hi|Android|6625839828|
|Wed Sep 02 2020 09:03:10 GMT+0000|submit option|null|launchComponent|Android|636363636891|
|Wed Sep 02 2020 09:13:11|quick check|88958d89c65f4fcea56e148a5a2838cfhdhdhd|hi|Android|6625839828|
|Wed Sep 02 2020 09:06:10 GMT+0000|submit option|null|launchComponent|Android|636363636891|
I grouped this table by deviceserial with this command:
SELECT
deviceserial,
DATE_PART('minute', max(createdat)::timestamp - min(createdat)::timestamp) AS time_difference
FROM
devices
GROUP BY deviceserial;
Then the result is below. Now I want to create a new table named "device_usage" from the columns deviceserial, device and usage shown below, plus another column for id. After that I want to copy the "device_usage" table to another database. How can I do that? I could not find a good solution.
|deviceserial|device|usage|
|------------|------|-----|
|636363636891|Android|3|
|636363636890|Android|8|
|6625839827|Android|29|
|6625839828|Android|4|
Connect to both databases.
Run a CREATE TABLE statement with the appropriate column definitions on the second database. The id column is created as
id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY
Run your query on the first database and fetch the results row by row.
For each row, run an appropriate INSERT statement on the second database. That statement doesn't include id, so the column is filled with the autogenerated value.
These steps are for a programmatic solution; a sketch of the statements follows below.
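A minimal sketch of those two statements, using the column names from your result and guessed types (adjust the types to match your source data):
-- on the second database; the column types here are assumptions
CREATE TABLE device_usage (
    id           bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    deviceserial text NOT NULL,
    device       text,
    usage        numeric
);
-- run once per fetched row; id is omitted so it receives the autogenerated value
INSERT INTO device_usage (deviceserial, device, usage)
VALUES ('636363636890', 'Android', 8);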
If you want to do it by hand (i.e., as a one-time task), move the data with COPY. Using psql, moving the data could look like this:
\connect db1
\copy (SELECT /* your query */) TO 'datafile'
\connect db2
\copy newtable (deviceserial, device, usage) FROM 'datafile'
I have duplicity running an incremental daily backup to S3. About 37 GiB.
For the first month or so, it went fine and used to finish in about an hour. Then it started taking too long to complete the task. Right now, as I type, it is still running the daily backup that started 7 hours ago.
I'm running two commands, first to backup and then cleanup:
duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST
The logs
Temp has 54774476800 available, backup will use approx 907857100.
So the temp has enough space, good. Then it starts with this...
Copying duplicity-full-signatures.20161107T090303Z.sigtar.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-13tylb-2
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-NanCxQ-3
[...]
Copying duplicity-inc.20161110T095624Z.to.20161111T103456Z.manifest.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-VQU2zx-30
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-4Idklo-31
[...]
This continues for every day up to today, taking several minutes per file, and then it continues with this...
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Mon Nov 7 09:03:03 2016)
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Wed Nov 9 18:09:07 2016)
Added incremental Backupset (start_time: Thu Nov 10 09:56:24 2016 / end_time: Fri Nov 11 10:34:56 2016)
After a long time...
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /home/user/.cache/duplicity/700b5f90ee4a620e649334f96747bd08
Found 6 secondary backup chains.
Secondary chain 1 of 6:
-------------------------
Chain start time: Mon Nov 7 09:03:03 2016
Chain end time: Mon Nov 7 09:03:03 2016
Number of contained backup sets: 1
Total number of contained volumes: 2
Type of backup set: Time: Num volumes:
Full Mon Nov 7 09:03:03 2016 2
-------------------------
Secondary chain 2 of 6:
-------------------------
Chain start time: Wed Nov 9 18:09:07 2016
Chain end time: Wed Nov 9 18:09:07 2016
Number of contained backup sets: 1
Total number of contained volumes: 11
Type of backup set: Time: Num volumes:
Full Wed Nov 9 18:09:07 2016 11
-------------------------
Secondary chain 3 of 6:
-------------------------
Chain start time: Thu Nov 10 09:56:24 2016
Chain end time: Sat Dec 10 09:44:31 2016
Number of contained backup sets: 31
Total number of contained volumes: 41
Type of backup set: Time: Num volumes:
Full Thu Nov 10 09:56:24 2016 11
Incremental Fri Nov 11 10:34:56 2016 1
Incremental Sat Nov 12 09:59:47 2016 1
Incremental Sun Nov 13 09:57:15 2016 1
Incremental Mon Nov 14 09:48:31 2016 1
[...]
After listing all chains:
Also found 0 backup sets not part of any chain, and 1 incomplete backup set.
These may be deleted by running duplicity with the "cleanup" command.
This was only the backup part. It takes hours doing all of this, yet only 10 minutes to upload the 37 GiB to S3.
ElapsedTime 639.59 (10 minutes 39.59 seconds)
SourceFiles 288
SourceFileSize 40370795351 (37.6 GB)
Then comes the cleanup, which gives me this:
Cleaning up
Local and Remote metadata are synchronized, no sync needed.
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
There are backup set(s) at time(s):
Tue Jan 10 09:58:05 2017
Wed Jan 11 09:54:03 2017
Thu Jan 12 09:56:42 2017
Fri Jan 13 10:05:05 2017
Sat Jan 14 10:24:54 2017
Sun Jan 15 09:49:31 2017
Mon Jan 16 09:39:41 2017
Tue Jan 17 09:59:05 2017
Wed Jan 18 09:59:56 2017
Thu Jan 19 10:01:51 2017
Fri Jan 20 09:35:30 2017
Sat Jan 21 09:53:26 2017
Sun Jan 22 09:48:57 2017
Mon Jan 23 09:38:45 2017
Tue Jan 24 09:54:29 2017
Which can't be deleted because newer sets depend on them.
Found old backup chains at the following times:
Mon Nov 7 09:03:03 2016
Wed Nov 9 18:09:07 2016
Sat Dec 10 09:44:31 2016
Mon Jan 9 10:04:51 2017
Rerun command with --force option to actually delete.
I found the problem. Because of an issue, I followed this answer, and added this code to my script:
rm -rf ~/.cache/deja-dup/*
rm -rf ~/.cache/duplicity/*
This was supposed to be a one-time thing because of a random bug duplicity had, but the answer didn't mention that. So every day the script was removing the cache right after syncing it, and on the next day duplicity had to download the whole thing again.
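For reference, a sketch of the corrected daily script, i.e. the same two duplicity commands from above with the cache purge removed so that ~/.cache/duplicity persists between runs:
#!/bin/sh
# Daily backup and cleanup, same placeholders as before
duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST
# No "rm -rf ~/.cache/duplicity/*" here: the local metadata cache has to survive
# between runs, otherwise duplicity re-downloads all signatures and manifests daily.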