Where to insert in a linked list and why? - dll

With open hashing, a new node is inserted into the linked list either at the front or at the back, but which is preferable and why, or doesn't it matter?
The principle of open hashing is that each slot of the array backing the hash table holds a singly linked list (SLL) (&).
Each node of the SLL holds a key-value pair and a reference to the next node (or null).
A new node can be added at the very front, i.e. where the SLL attaches to its array slot, or at the very back, i.e. at the tail of the SLL (for completeness: somewhere in between is also possible).
Some websites and books illustrate insertion under open hashing by adding AT THE BACK; other websites and books add AT THE FRONT.
My question: which is preferable? Which is most commonly used? Or does it not matter? And how does what is described here happen in Java? And in C#?
Illustrations: at the front: https://www.cs.usfca.edu/~galles/visualization/OpenHash.html and
at the back: https://visualgo.net/en/hashtable
(&) Or is the use of a doubly linked list mandatory/advisable?
I apologize if this question has already been asked, but after searching I only found explanations of the fact that the linked list grows; no one explains exactly where the insertion happens, at the front or at the back, and why.
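To make the two options concrete, here is a minimal Java sketch of separate chaining with insertion at the FRONT of the bucket (all class and method names are made up for illustration, not taken from any library). Front insertion is O(1) because the array slot already points at the head of the chain; appending at the back would require walking the chain or keeping an extra tail reference. Note that a correct put typically scans the chain for a duplicate key anyway, so the front/back choice is largely a matter of convention. For what it's worth, Java's own HashMap inserted new entries at the head of a bucket before Java 8 and appends at the tail (and treeifies long chains) since Java 8.

// Minimal sketch of separate chaining with insertion at the front of each bucket.
import java.util.Objects;

public class ChainedHashTable<K, V> {
    private static class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;
        Node(K key, V value, Node<K, V> next) { this.key = key; this.value = value; this.next = next; }
    }

    private final Node<K, V>[] buckets;

    @SuppressWarnings("unchecked")
    public ChainedHashTable(int capacity) {
        buckets = (Node<K, V>[]) new Node[capacity];
    }

    private int indexFor(K key) {
        return Math.floorMod(Objects.hashCode(key), buckets.length);
    }

    public void put(K key, V value) {
        int i = indexFor(key);
        // Replace the value if the key is already present...
        for (Node<K, V> n = buckets[i]; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) { n.value = value; return; }
        }
        // ...otherwise insert at the front: O(1), no need to find the tail.
        buckets[i] = new Node<>(key, value, buckets[i]);
    }

    public V get(K key) {
        for (Node<K, V> n = buckets[indexFor(key)]; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) return n.value;
        }
        return null;
    }
}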


Length extension attack doubts

So I've been studying the concept of length extension attacks, and there are a few things I noticed during my study that are not very clear to me.
1. Research papers explain how you can append some data to the end and form a new message. For example:
Desired New Data: count=10&lat=37.351&user_id=1&long=-119.827&waffle=eggo&waffle=liege
(notice the two waffle attributes). My question: if a parser function on the server side can track duplicate attributes, could the entire length extension attack then be pointless, because the server would notice the duplicate attributes? Is a proper parser that checks for duplicates a good defence against length extension attacks? I'm aware of the HMAC approach and other protections, but I'm asking specifically about parsers here.
2. Research says that the only vulnerable construction is H(key|message). It claims that H(message|key) won't work for the attacker because we would have to append a new key (which we obviously don't know). My question is: why would we have to append a new key? We don't do that when attacking H(key|message). Why can't we rely on the fact that we would pass the verification test (we would create the correct hash), and that if the parser tries to extract the key from it, it would take the only key in the block we send and resume from there? Why would we have to send two keys? Why doesn't the attack against H(message|key) work?
My question: if a parser function on the server side can track duplicate attributes, could the entire length extension attack then be pointless?
You are talking about a well-written parser. Writing software is hard and writing correct software is very hard.
In that example, you have seen an overwritten attribute. Can you say that a good parser must take the last one, or the first one? What is the rule? There are systems where the last one must be taken! So this is an attack that may or may not apply, depending on the system. Considering that knowledge of the length extension attack goes back to the 1990s, it is remarkable that applicable targets can still be found. And it was applied in the wild against the Flickr API in 2009, after almost 20 years:
Flickr's API Signature Forgery by Thai Duong and Juliano Rizzo, published on Sep. 28, 2009.
My question is why would we have to append a new key? We don't do it when we are attacking H(key|message). Why can't we rely on the fact that we will pass the verification test (we would create the correct hash) and that if the parser tries to extract the key from it, it would take the only key in the block we send out and resume from there? Why would we have to send two keys? Why doesn't the attack against H(message|key) work?
The attack is a signature forgery. The key is not known to the attacker, but they can still forge new signatures. The new message and signature (the extended hash) are sent to the server; the server then takes its key, combines it with the received message as in the original construction, and recomputes the hash as its canonical verification. If the result matches, the signature is valid.
The parser doesn't extract the key; the server already knows the key. The point is whether you can make sure that the data has really been extended or not. The padding rule is simple: append a 1 bit, then fill with zeroes so that the last 64 (or 128) bits encode the message length (very simplified; for example, the final padded length must be a multiple of 512 bits for SHA-256). To see that there is another padding inside, you must check every block, and only then may you claim that there is an extension attack. Yes, you can do this; however, one of the aims of cryptography is also to reduce such dependencies. If we can create a better signature scheme that eliminates this checking, we recommend it over the others. That enables software developers to write more secure implementations easily.
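To make the padding rule concrete, here is a small Java sketch (my own illustration, not from the original answer) that computes the SHA-256-style padding for a message of a given length: one 0x80 byte (the single 1 bit), zero bytes until the length is congruent to 56 mod 64, then the original length in bits as a 64-bit big-endian integer. A length extension attacker appends exactly this padding (computed for the secret-plus-message length) before their extra data.

// Sketch: the SHA-256 / Merkle-Damgard style padding for a message of `messageLengthBytes` bytes.
// The padded length becomes a multiple of 64 bytes (512 bits); the last 8 bytes
// encode the original length in bits, big-endian.
import java.nio.ByteBuffer;

public class Sha256Padding {
    public static byte[] paddingFor(long messageLengthBytes) {
        // 1 byte for 0x80, then zeros, then 8 bytes for the bit length.
        int zeroBytes = (int) Math.floorMod(56 - (messageLengthBytes + 1), 64L);
        byte[] pad = new byte[1 + zeroBytes + 8];
        pad[0] = (byte) 0x80;                        // the single 1 bit
        ByteBuffer.wrap(pad, 1 + zeroBytes, 8)
                  .putLong(messageLengthBytes * 8);  // original length in bits
        return pad;
    }

    public static void main(String[] args) {
        // A 10-byte message gets 54 bytes of padding: 10 + 54 = 64 = one block.
        System.out.println(paddingFor(10).length);   // prints 54
    }
}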
Why doesn't the attack against H(message|key) work?
Simple: you form the extended message message|extended and send the extended hash H(message|key|extended) to the server. The server then takes the message message|extended, appends the key to get message|extended|key, and hashes it as H(message|extended|key). This is clearly not equal to the extended hash H(message|key|extended).
Note that the truncated versions of the SHA-2 series, like SHA-512/256, are resistant to length extension attacks. SHA-3 is immune to it by design, and that enables the simple KMAC signature scheme. BLAKE2 is also immune, since it is designed with the HAIFA construction.
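Since the question mentions the HMAC approach: for comparison, here is a minimal Java sketch computing HMAC-SHA256 with the standard javax.crypto API, which is the usual way to avoid the H(key|message) construction altogether (the key and message strings are placeholders).

// Minimal HMAC-SHA256 example using the standard JCA API.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class HmacExample {
    public static void main(String[] args) throws Exception {
        byte[] key = "server-secret-key".getBytes(StandardCharsets.UTF_8);          // placeholder key
        byte[] message = "count=10&lat=37.351&user_id=1".getBytes(StandardCharsets.UTF_8);

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] tag = mac.doFinal(message);

        // Unlike a raw hash of key|message, this tag cannot be length-extended.
        System.out.println(Base64.getEncoder().encodeToString(tag));
    }
}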

Download OEIS sequences with known algorithm to produce them

I was reading some interesting questions about the topic "Can we make a program that, given a particular sequence, produces the next terms?", like this one, and I really like the detailed answer to this one. I understand that the answer is "That's impossible without more restrictions", and that given some restrictions (polynomials, rational functions or boolean maps) we know some good algorithms, as the second answer I linked explains.
Now, a natural question is how much we can solve, trying our best even if we can't always succeed, to answer the original, general question. What I usually do when facing a hard sequence is to check whether it's in OEIS and, if it seems to be there, to see whether there is any formula or algorithm there to produce it. You can download a small version of OEIS with the first terms of each sequence, and you can make queries to find formulas or Maple algorithms for a particular sequence. My question is: do you think it's feasible to download a small version of OEIS that includes, along with the first terms, a little algorithm to produce each sequence?
The natural problem here is that I haven't seen any link to download the entire OEIS database with all the details, which maybe deserves its own question. Even if we had this, you would need to read the formulas/algorithms (which, from what I've seen, can be written in different languages) and interpret them correctly. But I thought maybe someone here knows how to solve this; in any case, thanks in advance.
You could, as you note, download the sequences and their A-numbers from the link mentioned here: https://oeis.org/wiki/Welcome#Compressed_Versions
After searching that and finding one sequence (or a small number of sequences) of interest, you could scrape the respective page(s) for formulas. There are specific fields for Maple and Mathematica, which may be helpful, and otherwise, an entry in the PROGRAM field should include identifying information when it is not one of the standard languages with its own field in the database. See: http://oeis.org/wiki/Style_Sheet
Unofficially, but with the interests of the OEIS in mind, I would not recommend trying to download or scrape the OEIS in its entirety. Whether it's one person, or a whole host of people, we would certainly recommend using the compressed version of the database to identify sequences of interest by A-number first, then pulling their entire entry by scraping the site or querying the OEIS using methods that you have already mentioned: Programmatic access to On-Line Encyclopedia of Integer Sequences
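As a rough illustration of that workflow (identify a sequence by A-number locally, then pull its full entry), here is a Java sketch using the JDK's built-in HttpClient against the OEIS search endpoint; the fmt=json query parameter and the exact response fields are assumptions to verify against the programmatic-access links above.

// Sketch: fetch the full OEIS entry for one A-number as JSON.
// The query format (q=id:A000045&fmt=json) is an assumption; check the OEIS
// documentation on programmatic access before relying on it.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OeisFetch {
    public static void main(String[] args) throws Exception {
        String aNumber = "A000045"; // the Fibonacci numbers, as an example
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://oeis.org/search?q=id:" + aNumber + "&fmt=json"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON body should contain the formula, Maple and Mathematica fields
        // described in the style sheet; parse it with any JSON library.
        System.out.println(response.body());
    }
}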
If this sounds laborious, perhaps an alternative is the Wolfram Cloud, which achieves this through other means. For example, you can navigate to the cloud (you may have to register just to get access) at: https://www.wolframcloud.com/
Typing in something like FindSequenceFunction[{1, 2, 3, 5, 17, 305, 34865}] will give you a formula, if Wolfram/Mathematica can find one. The documentation for FindSequenceFunction can be found here: https://reference.wolfram.com/language/ref/FindSequenceFunction.html
Wolfram/Mathematica can also invoke the OEIS using packages like the one described here: https://mathematica.stackexchange.com/questions/40/is-it-possible-to-invoke-the-oeis-from-mathematica

Dict vs Record in elm

While implementing a simple app I ran into the problem of trying to update a nested record. I found a solution online but it really seems like a whole lot of bloated code.
As I was looking for alternatives I found Dictionaries. These seem like a solution to that problem: if I use a dictionary inside of a record, I can avoid all that bloated code and get nested updates.
Seeing dictionaries and records next to each other made me wonder: why would I use a record instead of a dictionary, or vice versa? The two seem really similar to me, so I am not sure I see the advantage of one over the other. Of course I can see that there is a difference in syntax, but is that all?
I learned somewhere that the access time complexity of Dict is O(log n) -- does it do a binary search on the keys? -- but I can't find the access time complexity for records. I am wondering whether that is O(1) and whether that is one of the advantages.
Either way, they both seem to map to a single data structure in other languages (e.g. Python's dictionaries, JS objects, Java hash tables), so why do we need two in Elm?
Dicts and records might seem very similar when coming from JavaScript, but in a statically typed language they are actually very different. I think just about the only property they have in common is that they are both key-value containers.
The biggest differences, I think, are that Dicts are homogeneous, meaning values must be of the same type, and "dynamically" keyed and sized, meaning keys are not statically checked (i.e. at compile time) and key-value pairs can be added at runtime. Records, on the other hand, include the key names and value types in the record type, which means they can hold values of different types, but also can't have keys added or removed at runtime without changing the type itself.
The benefit of easily being able to insert into and update a Dict is something you pay for when you try to get the values back out. Dict.get returns a Maybe which you'll then have to handle, because the type doesn't give any guarantee that it contains anything at all. You also won't get a compiler error if you mistype the name of a key.
Overall, a Dict forsakes most of the benefits of static typing. I think a good rule of thumb is that if you know the key names, you should most likely go with records. If you don't, go with Dict.
You also seem about right regarding performance, but I think that's a secondary concern. Record access should be equivalent to accessing the elements of an array by index, since so much information is known at compile time that it can essentially be compiled down to a fixed-size array.
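Since the question mentions Java hash tables: a rough Java analogy (only an illustration; Elm records and Dicts have their own semantics) is the difference between a class/record with fixed fields, checked at compile time, and a HashMap whose keys only exist at runtime and whose lookups can fail.

import java.util.HashMap;
import java.util.Map;

public class DictVsRecord {
    // Analogue of an Elm record: field names and types are part of the type,
    // access is checked at compile time and cannot fail.
    record User(String name, int age) {}

    public static void main(String[] args) {
        User user = new User("Ada", 36);
        int age = user.age();                 // always present, no Maybe/null

        // Analogue of an Elm Dict: one value type, keys added at runtime,
        // lookups may fail, and key typos are not caught by the compiler.
        Map<String, Integer> ages = new HashMap<>();
        ages.put("Ada", 36);
        Integer maybeAge = ages.get("Adda");  // typo compiles fine, returns null
        System.out.println(age + " " + maybeAge);
    }
}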

Disassemble surfaces in CATIA using VBA

Is there a way to disassemble a surface into its domains in CATIA through VBA, maintaining the dependencies between the initial surface and the separated domains?
I can suggest two options; I have already used both of them in my own work in a similar way. Neither of them is, of course, guaranteed to update after the input changes, but associativity with the existing domains will be preserved.
Option 1:
Pick a random face through automation using Search (with the topology option in the query string; to get the right query string, first try it with manual searches with the Include Topology option active).
Create two Extracts with Point continuity based on this face: one will be the first domain you're looking for; the second will be in Complementary mode and will be the input for the next step.
Repeat from step 1 until all domains are extracted. The last complementary Extract will probably raise an error (handle it with an On Error statement).
Option 2:
Disassemble the surface into domains, obtaining dumb (non-associative) surfaces, and store them.
Create a point on surface on each of them.
Create a Near for each of the points obtained before, always on the same input surface.
If you don't want to keep relationships with the dumb surfaces, insert this step after step 2: read each point's coordinates using the GetCoordinates method, then create another point by coordinates and use that in the Near. Then delete all the dumb surfaces and the points created on them.
Regards

Best way to store a small key-value list in Redis

I'm trying to use Redis as a primary database for a small game I'm making (mostly to mess around with programming and using Redis).
However I came across a scenario that I couldn't find an answer to:
I wish to store a list of the names of different maps that people can be on (not many of them) along with their IDs. Note: I never need to get the ID from the name.
The two ways I believe this can be done are either storing the information as a string or as a hash.
i.e.:
1) String based:
set maps:0 "Main"
set maps:1 "Island"
etc. (and maybe a maps:id key to store an auto-increment value)
2) Hash based:
hset maps "0" "Main"
hset maps "1" "Island"
etc
My question is which way seems best. Given that there will never be that many maps, I'm leaning towards the single hash object, partly because this provides a nice way to return all the maps in existence. But is there any particular reason that the string-based approach would be more useful?
Hopefully you can give me some clear information.
Thank you,
Pluckerpluck
The string-based values are actually discouraged because they consume a lot more memory than a hash.
Redis optimizes small hashes and encodes them in a memory-efficient manner. This encoding is called zipmap (or ziplist in Redis 2.6). See http://redis.io/topics/memory-optimization, especially the section "Use hashes when possible".
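For what it's worth, here is a small Java sketch of the hash-based option using the Jedis client (host, port and key names are just placeholders); HGETALL gives you the "all maps at once" lookup mentioned in the question.

// Sketch using the Jedis client: store all map names in a single Redis hash.
import redis.clients.jedis.Jedis;
import java.util.Map;

public class MapStore {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // HSET maps <id> <name>
            jedis.hset("maps", "0", "Main");
            jedis.hset("maps", "1", "Island");

            // Fetch a single name by id.
            String name = jedis.hget("maps", "0");

            // Fetch every map at once -- the convenience the question mentions.
            Map<String, String> all = jedis.hgetAll("maps");
            System.out.println(name + " " + all);
        }
    }
}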