Can we edit the CTxOut class and add a new field to it in order to send additional information with a transaction object, and then into the blockchain?
I got the following answer from here
Note that you should only do this, putting extra data into the block chain, if it is really necessary. The block chain has to be stored by every full node, so try not to take up all our hard drive space with unnecessary stuff whenever possible.
With that said, if you do want to add extra data to your transaction, then add an additional output to the transaction, for which the scriptPubKey has the following form:
OP_RETURN {80 bytes of whatever data you want}
80 bytes was chosen because it is big enough for a 64-byte hash plus 16 extra bytes of data, but not big enough to store anything maliciously big (like a movie collection). This transaction output is automatically unspendable, and so will not be kept in the UTXO set by any pruning. The other UTXOs from your transaction will still be safe.
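For illustration, here is a rough sketch (in Java, not tied to any particular Bitcoin library) of how such a scriptPubKey could be assembled at the byte level; the opcode constants are the standard Bitcoin values, and the 80-byte cap mirrors the usual relay policy:

    import java.io.ByteArrayOutputStream;

    // Sketch: assemble an OP_RETURN scriptPubKey carrying up to 80 bytes of data.
    public class OpReturnScript {
        private static final int OP_RETURN = 0x6a;     // marks the output as a data carrier
        private static final int OP_PUSHDATA1 = 0x4c;  // push with a 1-byte length prefix

        public static byte[] build(byte[] data) {
            if (data.length > 80) {
                throw new IllegalArgumentException("payload larger than 80 bytes");
            }
            ByteArrayOutputStream script = new ByteArrayOutputStream();
            script.write(OP_RETURN);
            if (data.length <= 75) {
                script.write(data.length);      // short push: the opcode is the length itself
            } else {
                script.write(OP_PUSHDATA1);     // 76..80 bytes need OP_PUSHDATA1
                script.write(data.length);
            }
            script.write(data, 0, data.length);
            return script.toByteArray();
        }
    }

The value of such an output would normally be set to zero, since the output is unspendable anyway.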
We use Chronicle Map as persisted storage. As new data arrives all the time, we keep putting new data into the map. Thus we cannot predict the correct value for net.openhft.chronicle.map.ChronicleMapBuilder#entries(long). Chronicle 3 will not break when we put in more data than expected, but performance will degrade. So we would like to recreate this map with a new configuration from time to time.
Now the real question: given a Chronicle Map file, how can we know which configuration was used for that file? Then we can compare it with the actual amount of data (the source of this knowledge is irrelevant here) and recreate the map if needed.
entries() is a high-level config that is not stored internally. What is stored internally is the number of segments, the expected number of entries per segment, and the number of "chunks" allocated in the segment's entry space. They are configured via ChronicleMapBuilder.actualSegments(), entriesPerSegment() and actualChunksPerSegmentTier() respectively. However, there is currently no way to query the last two numbers from the created ChronicleMap, so it doesn't help much. (You can query the number of segments via ChronicleMap.segments().)
You can contribute to Chronicle-Map by adding getters to ChronicleMap to expose those configurations. Or, you need to store the number of entries separately, e.g. in a file alongside the persisted ChronicleMap file.
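As a workaround, one pattern that works is to record the configured entries() value yourself, next to the persisted file, and compare it with the map's current size when deciding whether to rebuild. A minimal sketch, assuming String keys/values and an arbitrary sidecar file name (none of this is prescribed by Chronicle-Map):

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import net.openhft.chronicle.map.ChronicleMap;

    // Sketch: persist the entries() value in a sidecar file, since it cannot be
    // read back from the Chronicle Map file itself.
    public class MapConfigSidecar {

        static ChronicleMap<String, String> createMap(Path dir, long expectedEntries) throws Exception {
            File mapFile = dir.resolve("data.cmap").toFile();
            Files.writeString(dir.resolve("data.cmap.entries"), Long.toString(expectedEntries));
            return ChronicleMap
                    .of(String.class, String.class)
                    .averageKey("representative-key")        // size hints needed for String keys/values
                    .averageValue("representative-value")
                    .entries(expectedEntries)
                    .createPersistedTo(mapFile);
        }

        // True when the map has outgrown the entries() value it was created with.
        static boolean needsRecreate(Path dir, ChronicleMap<String, String> map) throws Exception {
            long configured = Long.parseLong(
                    Files.readString(dir.resolve("data.cmap.entries")).trim());
            return map.size() > configured;
        }
    }

Recreating the map is then a matter of building a new map with a larger entries() value and copying the data across.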
I'm receiving variable-sized data in each simulation step in Simulink. However, I need to wait a certain number of simulation steps before I have received the whole data package, and therefore I need some kind of variable-sized buffer. I have no information about the total amount of data that I'm going to receive. The only information I have is the number of simulation steps I have to wait until I have received the whole data.
I've tried to implement it via a MATLAB Function block and several Delay blocks that delay the output data of the MATLAB Function block for one simulation step, but I keep failing at the variable-size constraints (the Delay blocks don't support them), and I haven't found any buffer block that supports the functionality I need here.
Hope you can help me out!
Given that you know your input and output sample rates, I'd suggest writing a C-MEX S-function.
It wouldn't be trivial, but you can
set the input and output ports to have different sample rates
set the input and output ports to have variable signal length
store a pointer to a std::vector<...> class in the P work vector
the std::vector<...> gives you the ability to increase its size as new input data arrives, and to be emptied when the data is posted to the output.
Update based on comments:
For code generation you need to specify an upper bound for the size of the buffer, which makes a MATLAB Function block suitable.
Specify the maximum size of the buffer, and keep track of how much of it has been filled using an internal persistent variable.
But the only way to have a block with different sample rates at its input and its output is to write an S-function. For the MATLAB Function block I can think of two approaches:
a) write the code so that it has an internal buffer that fills and only updates the output when the buffer becomes full.
Of course the output sample rate will be the same as the input sample rate, but the data will only change when you specify that it should.
b) have two outputs, one being the buffer, and one being an "I've just become full" logical signal. Then follow the block by a Triggered Subsystem that feeds the buffer straight through it, and is rising edge triggered by the logical signal. The output of the Triggered Subsystem will then only update at the steps when the buffer becomes full.
The basic gist of what I'm trying to accomplish is setting up an image processing server. As the page code is created in ColdFusion, multiple images on the page may need to be resized and thumbnailed into appropriate sizes, each to a possibly different size and each with a possibly different algorithm.
The basic gist of how it works: using a simple img tag, the src attribute points to the image server, along the lines of the following.
<img src="http://imageserver.com/<clientname>/<primarykey>.jpg">
This allows the image resizing to occur asynchronously, and on a different server, thus not slowing down the current page call.
When the image processing server receives the call, it first checks whether that file exists. If Apache determines the file exists, it serves it right away; otherwise it invokes ColdFusion, which reads an entry from the database using the primary key passed to it to get the URL of the image to be processed and any associated parameters (in this case width, height, method, url, client, but possibly more in the future).
Currently I'm doing this using a hash system where the parameters are ordered alphabetically and then hashed. Is that a reasonable system, or will hash collisions eventually occur even though the data being hashed is quite small (between 50 and 200 characters)? Each client could likely store up to 10,000 images (in their own folder, so hash collisions would not be a problem cross-client).
To reduce DB calls, as the page processes, each time a processed image is desired I add that image's information to an array. At the end of the page I make two calls to the DB: first it checks whether the rows in my array already exist in the DB, and then, if necessary, it adds any rows that do not exist (storing their various parameters). The dilemma here is that the primary key (or what goes in the image tag) must be known before it is actually inserted into the DB; this way I'm not checking the DB for every single image, as some pages could have hundreds of images on them and that would be very inefficient.
Are hash collisions not a concern with this sample size (10k images per client, generated by 50-200 character strings)? What if I did something simple like <width>_<height>_<hash>.jpg, or put the images in folders like /<client>/<width>x<height>/<hash>.jpg, since that would further reduce the possibility of hash collisions (although not remove them)?
Any advice?
How are you hashing? Use SHA-512 for the hashing algorithm and you'll get a string 128 characters long. You may not want a URL so long, but the idea here is that you can minimize collisions via more complex algorithms.
http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-7c52.html
Even though I doubt you would have to worry about hash collisions, you may want to just use a UUID.
http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-70de.html
EDIT: Or use a uniqueidentifier as the primary key of the table you are storing the file in. Then, after an insert, you can use the OUTPUT clause of the query to return the key to be used however you want.
The way I resolved this was by hashing not only the filename but also its parameters, such as width and height. Thus the possibility of hash collisions is essentially zero until we hit millions (billions?) of records. So far we have had no hash collisions.
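For reference, the scheme boils down to hashing a canonical, alphabetically ordered parameter string. A rough Java equivalent (the parameter layout and the choice of SHA-256 are illustrative, not necessarily what the original ColdFusion code uses):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.TreeMap;

    // Sketch: derive a stable image key by hashing the alphabetically ordered parameters.
    public class ImageKey {
        public static String keyFor(TreeMap<String, String> params) throws Exception {
            // TreeMap iterates its keys in sorted order, which gives a canonical string.
            StringBuilder canonical = new StringBuilder();
            params.forEach((k, v) -> canonical.append(k).append('=').append(v).append('&'));

            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(canonical.toString().getBytes(StandardCharsets.UTF_8));

            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex + ".jpg";
        }
    }

Because width and height are part of the hashed string, two different renditions of the same source image get different keys, which is what makes collisions so unlikely at this scale.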
My Java/Groovy program receives table names and table fields from user input; it queries the tables in SAP and returns their contents.
The user input may concern the tables CDPOS and CDHDR. After reading the SAP documentation and googling, I found these are tables storing change document logs. But I did not find any remote function calls that can be used from Java to perform this kind of query.
Then I used the deprecated RFC function module RFC_READ_TABLE and tried to build up customized queries depending only on this RFC. However, I found that if the number of desired fields I pass to this RFC is more than 2, I always get the DATA_BUFFER_EXCEEDED error, even if I limit the max rows.
I am not authorized to be an ABAP developer in the SAP system and cannot add any function module to the existing system, so I can only write code to accomplish this requirement in Java.
Am I doing something wrong? Could you give me some hints on this issue?
DATA_BUFFER_EXCEEDED only happens if the total width of the fields you want to read exceeds the width of the DATA parameter, which may vary depending on the SAP release (512 characters for current systems). It has nothing to do with the number of rows, only with the size of a single data set.
So the question is: what are the contents of the FIELDS parameter? If it's empty, this means "read all fields." CDHDR is 192 characters in width, so I'd assume the problem is CDPOS, which is 774 characters wide. The main culprits would be the fields VALUE_OLD and VALUE_NEW, both 245 characters.
Even if you don't get developer access, you should prod someone to get read-only dictionary access to be able to examine the structures in detail.
Shameless plug: RCER contains a wrapper class for RFC_READ_TABLE that takes care of field handling and ensures that the total width of the selected fields is below the limit imposed by the function module.
Also be aware that these tables can be HUGE in production environments - think billions of entries. You can easily bring your database to a grinding halt by performing excessive read operations on these tables.
PS: RFC_READ_TABLE is not released for customer use as per SAP note 382318, and note 758278 recommends creating your own function module, providing a template with improved logic.
Use BBP_RFC_READ_TABLE instead
There is a way around the DATA_BUFFER_EXCEEDED error. Although this function is not released for customer use as per SAP OSS note 382318, you can get around the issue by changing the way you pass parameters to the function. It's not a single field that is causing your error; the error is raised if a row of data exceeds 512 bytes. CDPOS will have this issue for sure!
The workaround, if you know how to call the function using JCo and pass table parameters, is to specify the exact fields you want returned. You can then keep your returned results under the 512-byte limit.
Using your example of table CDPOS, specify something like this and you should be good to go (be careful, CDPOS can get massive! You should build and pass a WHERE clause!):
FIELDS = 'OBJECTCLAS'....
FIELDS = 'OBJECTID'
In Java it can be expressed as:
listParams.setValue(this.getpObjectclas(), "OBJECTCLAS");
By limiting the fields you are returning you can avoid this error.
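Sketched with SAP JCo 3, the "request only the fields you need" call looks roughly like the following; the destination name and the WHERE condition are placeholders, so treat it as an outline rather than a drop-in:

    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoFunction;
    import com.sap.conn.jco.JCoTable;

    // Sketch: call RFC_READ_TABLE on CDPOS with an explicit field list so that a
    // returned row stays under the 512-byte DATA limit.
    public class ReadCdpos {
        public static void main(String[] args) throws Exception {
            JCoDestination dest = JCoDestinationManager.getDestination("MY_SAP_DEST"); // placeholder
            JCoFunction fn = dest.getRepository().getFunction("RFC_READ_TABLE");

            fn.getImportParameterList().setValue("QUERY_TABLE", "CDPOS");
            fn.getImportParameterList().setValue("DELIMITER", "|");
            fn.getImportParameterList().setValue("ROWCOUNT", 100);   // keep result sets small

            JCoTable fields = fn.getTableParameterList().getTable("FIELDS");
            for (String f : new String[] {"OBJECTCLAS", "OBJECTID", "CHANGENR", "FNAME"}) {
                fields.appendRow();
                fields.setValue("FIELDNAME", f);
            }

            JCoTable options = fn.getTableParameterList().getTable("OPTIONS");
            options.appendRow();
            options.setValue("TEXT", "OBJECTCLAS = 'MATERIAL'");      // placeholder WHERE clause

            fn.execute(dest);

            JCoTable data = fn.getTableParameterList().getTable("DATA");
            for (int i = 0; i < data.getNumRows(); i++) {
                data.setRow(i);
                System.out.println(data.getString("WA"));             // one delimited row per line
            }
        }
    }

The four fields above add up to well under the 512-character row limit, which is the whole point of listing them explicitly.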
I need to store items of varying length in a circular queue on a flash chip. Each item will have its own encapsulation so I can figure out how big it is and where the next item begins. When there are enough items in the buffer, it will wrap around to the beginning.
What is a good way to store a circular queue in a flash chip?
There is a possibility of tens of thousands of items I would like to store. So starting at the beginning and reading to the end of the buffer is not ideal because it will take time to search to the end.
Also, because it is circular, I need to be able to distinguish the first item from the last.
The last problem is that this is stored in flash, so erasing each block is both time consuming and can only be done a set number of times for each block.
First, block management:
Put a small header at the start of each block. The main thing you need in order to keep track of the "oldest" and "newest" blocks is a block sequence number, which simply increments modulo k. k must be greater than your total number of blocks. Ideally, make k less than your maximum value (e.g. 0xFFFF) so you can easily tell what an erased block looks like.
At start-up, your code reads the headers of each block in turn, and locates the first and last blocks in the sequence n[i+1] = (n[i] + 1) modulo k. Take care not to get confused by erased blocks (block number is e.g. 0xFFFF) or data that is somehow corrupted (e.g. an incomplete erase).
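To make the start-up scan concrete, here is a rough Java sketch over an array of sequence numbers already read from the block headers; the constants and the array-based model are illustrative assumptions:

    // Sketch: find the newest block by walking the modulo-k sequence numbers.
    // seq[i] holds the sequence number read from block i's header (ERASED if blank).
    public class BlockScan {
        static final int K = 0xFFF0;        // sequence modulus: > number of blocks, < erased marker
        static final int ERASED = 0xFFFF;   // header value of a freshly erased block

        static int newestBlock(int[] seq) {
            int newest = -1;
            for (int i = 0; i < seq.length; i++) {
                if (seq[i] == ERASED) continue;                 // skip erased blocks
                int next = seq[(i + 1) % seq.length];
                // Block i is the newest if its physical successor does not continue the sequence.
                if (next == ERASED || next != (seq[i] + 1) % K) {
                    newest = i;
                }
            }
            return newest;                                      // -1 if every block is erased
        }
    }

The oldest in-use block is then simply the first non-erased block after the newest one, wrapping around as needed.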
Within each block
Each block initially starts empty (each byte is 0xFF). Each record is simply written one after the other. If you have fixed-size records, then you can access it with a simple index. If you have variable-size records, then to read it you have to scan from the start of the block, linked-list style.
If you want to have variable-size records but avoid a linear scan, then you could have a well-defined header on each record. E.g. use 0 as a record delimiter and COBS-encode (or COBS/R-encode) each record. Or use a byte of your choice as a delimiter, and 'escape' that byte if it occurs inside a record (similar to the PPP protocol).
At start-up, once you know your latest block, you can do a linear scan for the latest record. Or if you have fixed-size records or record delimiters, you could do a binary search.
Erase scheduling
For some flash memory chips, erasing a block can take significant time, e.g. 5 seconds. Consider scheduling an erase as a background task a bit "ahead of time": e.g. when the current block is x% full, start erasing the next block.
Record numbering
You may want to number records. The way I've done it in the past is to put, in the header of each block, the record number of the first record. Then the software has to keep count of the numbers of each record within the block.
Checksum or CRC
If you want to detect corrupted data (e.g. incomplete writes or erases due to unexpected power failure), then you can add a checksum or CRC to each record, and perhaps to the block header. Note the block header CRC would only cover the header itself, not the records, since it could not be re-written when each new record is written.
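As a trivial illustration of the per-record check, java.util.zip.CRC32 can be appended to each record before it is written; the 4-byte little-endian footer layout here is just one possible choice:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.util.zip.CRC32;

    // Sketch: append a CRC32 to a record payload so corruption can be detected on read-back.
    public class RecordCrc {
        static byte[] withCrc(byte[] payload) {
            CRC32 crc = new CRC32();
            crc.update(payload);
            ByteBuffer out = ByteBuffer.allocate(payload.length + 4).order(ByteOrder.LITTLE_ENDIAN);
            out.put(payload).putInt((int) crc.getValue());    // 4-byte CRC footer
            return out.array();
        }

        static boolean isValid(byte[] record) {
            if (record.length < 4) return false;
            CRC32 crc = new CRC32();
            crc.update(record, 0, record.length - 4);
            int stored = ByteBuffer.wrap(record, record.length - 4, 4)
                                   .order(ByteOrder.LITTLE_ENDIAN).getInt();
            return (int) crc.getValue() == stored;
        }
    }

On read-back after an unexpected power failure, any record that fails isValid() can simply be discarded as incomplete.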
Keep a separate block that contains a pointer to the start of the first record and the end of the last record. You can also keep more information like the total number of records, etc.
Until you initially run out of space, adding records is as simple as writing them to the end of the buffer and updating the tail pointer.
As you need to reclaim space, delete enough records so that you can fit your current record. Update the head pointer as you delete records.
You'll need to keep track of how much extra space has been freed. If you keep a pointer to the end of the last record, then the next time you need to add a record you can compare that with the pointer to the first record to determine whether you need to delete any more records.
Also, if this is NAND, you or the flash controller will need to do deblocking and wear-leveling, but that should all be at a lower layer than allocating space for the circular buffer.
I think I get it now. It seems like your largest issue will be: having filled the available space for recording, what happens next? The new data should overwrite the oldest data, which is, I believe, what you mean by a circular buffer. But since the data is not fixed-length, you may overwrite more than one record.
I'm assuming that the amount of variability in length is high enough that padding everything out to a fixed length isn't an option.
Your write segment needs to keep track of the address that represents the start of the next record to write. If you know the size of a block to write ahead of time, you can tell if you are going to end up at the end of the logical buffer and start over at '0'. I wouldn't split a record up with some at the end and some at the beginning.
A separate register can track the beginning; this is the oldest data that hasn't been overwritten yet. If you went to read out the data this is where you would start.
Given the write-start address and the length of data it's about to commit, the data writer would then check whether it needs to bump the read register: examine the oldest record, read its length, and advance to the next record, repeating until there is enough room to write the new data. There will probably be a gap of junk data between the end of the written data and the start of the oldest data. But this way you are only writing an address or two as overhead, rather than rearranging blocks.
At least, that's probably what I would do. HTH
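A minimal sketch of that "bump the read register" step, with the free-byte count tracked explicitly rather than recomputed from the two addresses; recordLengthAt is a stand-in for whatever routine decodes the record header at a given address:

    import java.util.function.LongUnaryOperator;

    // Sketch: drop the oldest records until the new one fits in the logical buffer.
    public class WriteCursor {
        long writeAddr;        // where the next record will start (advanced by the caller after a write)
        long readAddr;         // oldest record not yet overwritten
        long freeBytes;        // equals capacity when the buffer is empty
        final long capacity;

        WriteCursor(long capacity) {
            this.capacity = capacity;
            this.freeBytes = capacity;
        }

        void makeRoomFor(long newRecordLen, LongUnaryOperator recordLengthAt) {
            while (freeBytes < newRecordLen) {
                long len = recordLengthAt.applyAsLong(readAddr);  // length of the oldest record
                readAddr = (readAddr + len) % capacity;           // advance past it
                freeBytes += len;                                 // its space is reusable now
            }
        }
    }
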
The "circular" in a flash can be done on basis of block size, which means that you must declare how much blocks of the flash you allocate for this buffer.
The actual size of the buffer will be at each particular time between n-1 (n is the number of blocks) and n.
Each block should start with an header that contains sequential number or timestamp that could be used to determine which block is older than the other.
Each Item encapsulated with an header and a footer. the default header contains whatever you want but according to this header you must know the size of the item. The default footer is 0xFFFFFFFF. This value indicates a null termination.
In your RAM you must keep pointers to the oldest block and the latest block, and to the oldest item and the latest item. On power-up you go over all blocks, find the relevant blocks, and load these members.
When you want to store a new item, you check whether the latest block contains enough space for it. If it does, you save the item at the end of the previous item and change the previous footer to point to this item. If it does not contain enough space, you need to erase the oldest block. Before you erase that block, change the oldest-block members (in RAM) to point to the next block and the oldest-item member to point to the first item in that block.
Then you can save the new item in this block and change the footer of the latest item to point to it.
I know the explanation may sound complicated, but the process is very simple, and if you write it correctly you can even make it power-fail safe (always keep in mind the order of the writes).
Note that the circularity of the buffer is not stored in the flash; the flash only contains blocks with items, and you can determine the order of these items from the block headers and item headers.
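A rough sketch of that append path over a single simulated block (a plain byte array standing in for the flash block); the 4-byte length header and the footer-patched-with-next-offset layout are assumptions made for illustration:

    import java.nio.ByteBuffer;

    // Sketch: append an item as [4-byte length][payload][4-byte footer], where the footer
    // stays 0xFFFFFFFF until the next item is written and is then patched with its offset.
    public class BlockAppend {
        static final int NULL_FOOTER = 0xFFFFFFFF;

        // Returns the offset of the new item, or -1 if the block is full (erase-oldest path).
        static int append(byte[] block, int writeOffset, int prevFooterOffset, byte[] payload) {
            int needed = 4 + payload.length + 4;                  // header + payload + footer
            if (writeOffset + needed > block.length) {
                return -1;
            }
            ByteBuffer buf = ByteBuffer.wrap(block);
            buf.putInt(writeOffset, payload.length);              // item header: payload length
            buf.position(writeOffset + 4);
            buf.put(payload);
            buf.putInt(writeOffset + 4 + payload.length, NULL_FOOTER);

            if (prevFooterOffset >= 0) {
                buf.putInt(prevFooterOffset, writeOffset);        // link the previous item to this one
            }
            return writeOffset;
        }
    }

On real NOR flash, patching the previous footer in place works because an erased word is all 1s and the write only clears bits, which is why the all-ones footer doubles as a "not yet linked" marker.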
I see three options:
Option 1 is to pad everything out to the same size. This is simple: store pointers to the head and tail of the buffer so you know where to write and where to start reading from, and use the size of each object to get the offset to the next. This means you need to traverse the buffer as you would a linked list, i.e. it's slow if you need item 5000.
Option 2 is to store only pointers to the real data in the circular buffer; that way, when you loop around, you don't have to deal with size mismatches. If you store the real data in a circular buffer and don't pad it out, you could run into situations where you're overwriting multiple items with one new data object, and I assume that is not OK.
Store the actual data elsewhere in flash. Most flash will have some sort of wear leveling built in; if so, you don't need to worry about overwriting the same location multiple times, as the IC will figure out where to actually store it on the chip. Just write to the next available free space.
This means you need to pick a maximum size for the circular buffer, and how you do this depends on the data variability. If the size of the data doesn't change much, say by only a few bytes, then you should just pad it out and use option 1. If the size changes wildly and unpredictably, choose the largest size it could be and figure out how many objects of that size would fit in your flash; use that as the max number of entries in the buffer. This means you waste a bunch of space.
Option 3: if the object can really be any size, you're at the point where you should just use a file system. Name the files in order and loop back when you're full, keeping in mind that if your new entry is large, you may have to delete multiple old entries to fit it in. This is really just an extension of option 2, as option 2 is in many ways a simple file system.