What is the seq counter after all masters restart - seaweedfs

I just looked at the implementation of compactmap.
Suppose the SEQ I use is in memory and the IDX is in memory as well. If all the masters go down at the same time, the sequence should start again from 0.
In that case, when I get a FID from the master, it could be the same as one issued before. Will this cause the previous file to be overwritten?

No. After a restart, the sequence number will be the maximum of all sequence numbers in all known volumes.
The (volumeId,sequence_number) pair is unique.

Related

Spring Batch Meta Data Schema Sequences

This is more of a request for a quick explanation of the sequences used to generate IDs for the Spring Batch tables that store Job and Step information.
I've run the sequences below in DB2 for a Spring Boot + Batch application:
CREATE SEQUENCE AR_REPORT_ETL_STEP_EXECUTION_SEQ AS BIGINT MAXVALUE 9223372036854775807 NO CYCLE;
CREATE SEQUENCE AR_REPORT_ETL_JOB_EXECUTION_SEQ AS BIGINT MAXVALUE 9223372036854775807 NO CYCLE;
CREATE SEQUENCE AR_REPORT_ETL_JOB_SEQ AS BIGINT MAXVALUE 9223372036854775807 NO CYCLE;
When the Spring Batch job is running, each ID field is incremented by 20 for each new record. Though this isn't a major issue, it's still slightly confusing as to why.
I removed the sequences and added them again with INCREMENT BY 1. Now every second record is incremented by 1 and the other by 20.
Any tips or explanation would be a great learning opportunity.
For performance reasons, Db2 for Linux, UNIX and Windows will by default pre-allocate 20 values of a sequence and keep them in memory for faster access.
If you don't want that caching behaviour and can tolerate the overhead of allocating without a cache, you can use the NO CACHE option when defining the sequence. But be aware that without caching, Db2 must do a synchronous transaction-log write for each number allocated from the sequence, which is usually undesirable in high-frequency insert situations.
Remember to explicitly activate the database (i.e. do not depend on auto-activation), as unused pre-allocated cached sequence numbers are discarded when the database deactivates.
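As a concrete example, explicit activation from the Db2 command line processor looks like this (the database name is a placeholder):
db2 activate database MYDB
The database, and with it the in-memory sequence cache, then stays resident until it is explicitly deactivated.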
Example no cache syntax:
CREATE SEQUENCE AR_REPORT_ETL_STEP_EXECUTION_SEQ
AS BIGINT MAXVALUE 9223372036854775807 NO CACHE NO CYCLE;
You can read more details in the documentation.
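If dropping and recreating the sequences is not desirable, a hedged alternative (using one of the sequence names from the question; the CACHE value is only illustrative) is to alter the caching in place:
-- disable caching entirely
ALTER SEQUENCE AR_REPORT_ETL_JOB_EXECUTION_SEQ NO CACHE;
-- or keep a smaller cache to reduce visible gaps without paying a log write per value
ALTER SEQUENCE AR_REPORT_ETL_JOB_EXECUTION_SEQ CACHE 5;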

Unique partition key or grouping partition key for bulk analysis

I want to see every value for every user in a time period.
Should my primary key be (user_id, timestamp), so that I hit every node and let the clustering key narrow things down?
Or should my primary key be (day_of_year, timestamp), so that my partition key finds a subset of nodes, and I use the timestamp clustering key for finer-grained control of the time period?
You need to estimate the data that a partition created by (day_of_year, timestamp) will have. If the partition size goes over 100 MB, then this might bring you problems in the future during repairs. So, if you are at risk of going over 100 MB for partitions like that, then you should go for (user_id, timestamp). It will also distribute effort throughout the nodes instead of concentrating on just one.
To get an idea of the partition size you can run nodetool cfstats. In the output, check the value for Compacted partition maximum bytes. It's not guaranteed to be the largest partition everywhere, but it will tell you the largest partition size that was compacted on the node where you are running the command.
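For concreteness, a sketch of the two candidate schemas in CQL (the table names, the value column, and the types are assumptions; user_id is added as a trailing clustering column in the second table so that rows from different users at the same timestamp do not collide):
CREATE TABLE values_by_user (
    user_id text,
    ts timestamp,
    value text,
    PRIMARY KEY ((user_id), ts)
);
CREATE TABLE values_by_day (
    day_of_year int,
    ts timestamp,
    user_id text,
    value text,
    PRIMARY KEY ((day_of_year), ts, user_id)
);
The first spreads each user's data across the cluster by user_id; the second concentrates a whole day in one partition, which is exactly the partition whose size you need to estimate.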

How to do a fuzzy scan in Redis?

Trying to obtain all values from Redis.
The key is:
key: mission:step:{missionId}:{yyMMdd} field:{userId} value:{progress}
I would like to get three columns, mission_id, user_id and progress, and put them into Hive/MySQL. The values of mission_id are different every day.
How to achieve this? Any help is appreciated.
You can, but it will be a slow operation. The Redis KEYS command lists all keys matching a glob-style pattern.
But it is an O(N) operation, as it scans every key in Redis to match it against the pattern.
You can use the SCAN command instead, which is safer than KEYS in a production environment.
Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like KEYS or SMEMBERS that may block the server for a long time (even several seconds) when called against big collections of keys or elements.
You can use SCAN 0 MATCH your-key-pattern to start the loop; each reply contains the cursor to pass to the next call, and you repeat until the returned cursor is 0.
SCAN is a cursor based iterator. This means that at every call of the command, the server returns an updated cursor that the user needs to use as the cursor argument in the next call.
An iteration starts when the cursor is set to 0, and terminates when the cursor returned by the server is 0.
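A rough sketch of that loop with redis-cli, assuming each key is a hash of the form mission:step:{missionId}:{yyMMdd} whose fields are {userId} and values are {progress} (the date 240101, the mission id 42, and the COUNT hint are placeholders):
redis-cli SCAN 0 MATCH "mission:step:*:240101" COUNT 1000
# reply: 1) the next cursor, 2) the keys matched in this batch
redis-cli SCAN <next-cursor> MATCH "mission:step:*:240101" COUNT 1000
# ...repeat until the returned cursor is 0, then for each matched key:
redis-cli HGETALL "mission:step:42:240101"
# returns userId/progress pairs; mission_id comes from splitting the key on ':'
Each HGETALL row then gives one (mission_id, user_id, progress) record to load into Hive/MySQL.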

Why does Redshift need to do a full table scan to find the max value of the DIST/SORT key?

I'm doing simple tests on Redshift to try and speed up the insertion of data into a Redshift table. One thing I noticed today is that doing something like this
CREATE TABLE a (x int) DISTSTYLE key DISTKEY (x) SORTKEY (x);
INSERT INTO a (x) VALUES (1), (2), (3), (4);
VACUUM a; ANALYZE a;
EXPLAIN SELECT MAX(x) FROM a;
yields
QUERY PLAN
XN Aggregate (cost=0.05..0.05 rows=1 width=4)
-> XN Seq Scan on a (cost=0.00..0.04 rows=4 width=4)
I know this is only 4 rows, but it still shouldn't be doing a full table scan to find the max value of a pre-sorted column. Isn't that metadata included in the work done by ANALYZE?
And just as a sanity check, the EXPLAIN for SELECT x FROM a WHERE x > 3 only scans 2 rows instead of the whole table.
Edit: I inserted 1,000,000 more rows into the table with random values from 1 to 10,000. Did a vacuum and analyze. The query plan still says it has to scan all 1,000,004 rows.
Analyzing query plans on a tiny data set does not yield any practical insight into how the database would perform a query.
The optimizer has thresholds, and when the cost difference between different plans is small enough it stops considering alternative plans. The idea is that for simple queries, the time spent searching for the "perfect" execution plan could exceed the total execution time of a less optimal plan.
Redshift has been developed on the code for ParAccel DB. ParAccel has literally hundreds of parameters that can be changed/adjusted to optimize the database for different workloads/situations.
Since Redshift is a "managed" offering, it has these settings preset at levels deemed optimal by Amazon engineers given an "expected" workload.
In general, Redshift and ParAccel are not that great for single-slice queries. These queries tend to be run on all slices anyway, even if they are only going to find data in a single slice.
Once a query is executing in a slice, the minimum amount of data read is a block. Depending on the block size this can mean hundreds of thousands of rows.
Remember, Redshift does not have indexes. So you are not going to have a simple record lookup that reads a few entries off an index and then goes laser-focused on a single page on disk. It will always read at least an entire block for that table, and it will do that in every slice.
How do you get a meaningful data set for evaluating a query plan?
The short answer is that your table needs a "large number" of data blocks per slice.
How many blocks per slice is my table going to require? The answer depends on several factors:
Number of nodes in your cluster
Type of node in the cluster - Number of slices per node
Data Type - How many bytes each value requires.
Type of compression encoding for the column involved in the query - the optimal encoding depends on the data demographics
So let's start at the top.
Redshift is an MPP database, where processing is spread across multiple nodes. See Redshift's architecture here.
Each node is further subdivided into slices, which are dedicated data partitions with corresponding hardware resources to process queries on that partition of the data.
When a table is created in Redshift, and data is inserted, Redshift will allocate a minimum of one block per slice.
Here is a simple example:
If you created a cluster with two ds1.8xlarge nodes, you would have 16 slices per node times two nodes for a total of 32 slices.
Let's say we are querying, and the column in the WHERE clause is something like ITEM_COUNT, an integer. An integer consumes 4 bytes.
Redshift uses a block size of 1MB.
So in this scenario, your ITEM_COUNT column would have available to it a minimum of 32 blocks times the 1 MB block size, which equates to 32 MB of storage.
If you have 32 MB of storage and each entry only consumes 4 bytes, you can have more than 8 million entries and still stay within that minimum allocation of one block per slice.
In this example in the Amazon Redshift documentation they load close to 40 million rows to evaluate and compare different encoding techniques. Read it here.
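As an aside, if you want a quick read on what encodings Redshift itself would recommend for your own data, a minimal sketch (the table name is a placeholder) is:
ANALYZE COMPRESSION your_table;
This samples the table and reports a suggested encoding and estimated space reduction per column.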
But wait.....
There is compression: if you have a 75% compression rate, that would mean even 32 million records would still fit inside that minimum one-block-per-slice allocation.
What is the bottom line?
In order to analyze your query plan you need tables with columns that span several blocks. In our example above, 32 million rows would still occupy only a single block per slice.
This means that in the configuration above, with all the assumptions, a table with a single record would most likely have the same query plan as a table with 32 million records, because in both cases the database only needs to read a single block per slice.
If you want to understand how your data is distributed across slices and how many blocks are being used you can use the queries below:
How many rows per slice:
Select trim(name) as table_name, id, slice, sorted_rows, rows
from stv_tbl_perm
where name like '<<your-tablename>>'
order by slice;
How to count how many blocks:
select trim(name) as table_name, col, b.slice, b.num_values, count(b.slice)
from stv_tbl_perm a, stv_blocklist b
where a.id = b.tbl
and a.slice = b.slice
and name like '<<your-tablename>>'
group by 1,2,3,4
order by col, slice;

Long UPDATE in postgresql

I have been running an UPDATE on a table containing 250 million rows with 3 indexes; this UPDATE uses another table containing 30 million rows. It has been running for about 36 hours now. I am wondering if there is a way to find out how close it is to being done: if it plans to take a million days to do its thing, I will kill it; yet if it only needs another day or two, I will let it run. Here is the command-query:
UPDATE pagelinks SET pl_to = page_id
FROM page
WHERE
(pl_namespace, pl_title) = (page_namespace, page_title)
AND
page_is_redirect = 0
;
The EXPLAIN is not the issue here and I only mention the big table's having multiple indexes in order to somewhat justify how long it takes to UPDATE it. But here is the EXPLAIN anyway:
Merge Join (cost=127710692.21..135714045.43 rows=452882848 width=57)
Merge Cond: (("outer".page_namespace = "inner".pl_namespace) AND ("outer"."?column4?" = "inner"."?column5?"))
-> Sort (cost=3193335.39..3219544.38 rows=10483593 width=41)
Sort Key: page.page_namespace, (page.page_title)::text
-> Seq Scan on page (cost=0.00..439678.01 rows=10483593 width=41)
Filter: (page_is_redirect = 0::numeric)
-> Sort (cost=124517356.82..125285665.74 rows=307323566 width=46)
Sort Key: pagelinks.pl_namespace, (pagelinks.pl_title)::text
-> Seq Scan on pagelinks (cost=0.00..6169460.66 rows=307323566 width=46)
Now I also sent a parallel query-command in order to DROP one of pagelinks' indexes; of course it is waiting for the UPDATE to finish (but I felt like trying it anyway!). Hence, I cannot SELECT anything from pagelinks for fear of corrupting the data (unless you think it would be safe to kill the DROP INDEX postmaster process?).
So I am wondering if there is a table that keeps track of the number of dead tuples or something, for it would be nice to know how fast or how far along the UPDATE is in completing its task.
Thx
(PostgreSQL is not as intelligent as I thought; it needs heuristics)
Did you read the PostgreSQL documentation for "Using EXPLAIN", to interpret the output you're showing?
I'm not a regular PostgreSQL user, but I just read that doc, and then compared to the EXPLAIN output you're showing. Your UPDATE query seems to be using no indexes, and it's forced to do table-scans to sort both page and pagelinks. The sort is no doubt large enough to need temporary disk files, which I think are created under your temp_tablespace.
Then I see the estimated database pages read. The top-level of that EXPLAIN output says (cost=127710692.21..135714045.43). The units here are in disk I/O accesses. So it's going to access the disk over 135 million times to do this UPDATE.
Note that even 10,000 rpm disks with 5 ms seek time can achieve at best 200 I/O operations per second under optimal conditions. This would mean that your UPDATE would take 188 hours (7.8 days) of disk I/O, even if you could sustain saturated disk I/O for that period (i.e. continuous reads/writes with no breaks). This is impossible, and I'd expect the actual throughput to be off by at least an order of magnitude, especially since you have no doubt been using this server for all sorts of other work in the meantime. So I'd guess you're only a fraction of the way through your UPDATE.
If it were me, I would have killed this query on the first day, and found another way of performing the UPDATE that made better use of indexes and didn't require on-disk sorting. You probably can't do it in a single SQL statement.
As for your DROP INDEX, I would guess it's simply blocking, waiting for exclusive access to the table, and while it's in this state I think you can probably kill it.
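A minimal sketch of doing that, assuming PostgreSQL 9.2 or later (older releases name the column procpid instead of pid, and current_query instead of query); the pid passed to pg_cancel_backend is a placeholder taken from the first query:
-- find the backend that is stuck waiting on the DROP INDEX
SELECT pid, state, query FROM pg_stat_activity WHERE query ILIKE 'DROP INDEX%';
-- cancel just that statement; the session itself stays connected
SELECT pg_cancel_backend(12345);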
This is very old, but if you want a way to monitor your update... Remember that sequence values are updated globally and are not rolled back with your transaction, so you can just create one to monitor this update from another session by doing this:
create sequence yourprogress;
UPDATE pagelinks SET pl_to = page_id
FROM page
WHERE
(pl_namespace, pl_title) = (page_namespace, page_title)
AND
page_is_redirect = 0 AND NEXTVAL('yourprogress')!=0;
Then in another session just do this (don't worry about transactions; sequence values are visible globally):
select last_value from yourprogress;
This will show how many rows have been processed so far, so you can estimate how long it will take.
At the end, restart your sequence to do another try:
alter sequence yourprogress restart with 1;
Or just drop it:
drop sequence yourprogress;
You need indexes or, as Bill pointed out, it will have to do sequential scans on both tables.
CREATE INDEX page_ns_title_idx on page(page_namespace, page_title);
CREATE INDEX pl_ns_title_idx on pagelinks(pl_namespace, pl_title);
CREATE INDEX page_redir_idx on page(page_is_redirect);