What is the size of the default block in Hyperledger Fabric?

I'm trying to estimate the size of a chain if I create a new blockchain using Hyperledger.
In order to get an idea of disk space usage, I would like to know what the average size of a default block is in Hyperledger Fabric.
Thank you beforehand,
Best Regards

Below you can find the default configuration provided for the ordering service. You can control block size with the BatchTimeout and BatchSize parameters. Also note that this is quite use-case dependent, since it relies on transaction size, i.e. the logic of your chaincode.
################################################################################
#
#   SECTION: Orderer
#
#   - This section defines the values to encode into a config transaction or
#     genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults
    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: solo
    Addresses:
        - orderer.example.com:7050
    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 2s
    # Batch Size: Controls the number of messages batched into a block
    BatchSize:
        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 10
        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 98 MB
        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 512 KB

The values are configured here:
################################################################################
#   SECTION: Orderer
################################################################################
Orderer: &OrdererDefaults
    OrdererType: solo
    Addresses:
        #- orderer0.ordererorg:7050
        - orderer0:7050
    Kafka:
        Brokers:
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 98 MB
        PreferredMaxBytes: 512 KB
    Organizations:
This file is configtx.yaml, and the corresponding structure is defined in config.go:
// BatchSize contains configuration affecting the size of batches.
type BatchSize struct {
    MaxMessageCount   uint32 `yaml:"MaxMessageSize"`
    AbsoluteMaxBytes  uint32 `yaml:"AbsoluteMaxBytes"`
    PreferredMaxBytes uint32 `yaml:"PreferredMaxBytes"`
}
The values are set according to the configtx.yaml file above.
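If you want to turn those parameters into a disk-space estimate, a rough sketch like the one below can help. The transaction size, transaction rate, and per-block overhead are assumptions you would replace with measurements from your own chaincode; only MaxMessageCount and BatchTimeout come from the configuration above.

# Back-of-the-envelope estimate of ledger growth for an ordering service
# configured as above. All inputs marked "assumed" are placeholders to be
# replaced with measurements from your own network and chaincode.
avg_tx_bytes = 3 * 1024           # assumed average serialized transaction size
tx_per_second = 20                # assumed sustained transaction rate
max_message_count = 10            # BatchSize.MaxMessageCount from configtx.yaml
batch_timeout_s = 2               # BatchTimeout from configtx.yaml
block_overhead_bytes = 10 * 1024  # assumed per-block header/metadata overhead

# A block is cut when MaxMessageCount is reached or BatchTimeout expires,
# whichever comes first (PreferredMaxBytes also caps the batch, but these
# small transactions never get near 512 KB).
tx_per_block = min(max_message_count, tx_per_second * batch_timeout_s)
avg_block_bytes = tx_per_block * avg_tx_bytes + block_overhead_bytes

blocks_per_day = tx_per_second * 86400 / tx_per_block
bytes_per_day = blocks_per_day * avg_block_bytes
print(f"~{avg_block_bytes / 1024:.0f} KB per block, "
      f"~{bytes_per_day / 1024 ** 3:.2f} GB of block data per day")

With these placeholder numbers a block holds 10 transactions of roughly 3 KB each, so block size is dominated by your payloads rather than by any fixed Fabric default, which is why there is no single answer to "the default block size".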

Related

Cannot serialize protocol buffer of type tensorflow.GraphDef as the serialized size (3459900923 bytes) would be larger than the limit (2147483647 bytes)

We are attempting to train a network on knee MRI scans through Niftynet. We have a spatial window_size = (400,400,400) with pixdim = (0.4,0.4,0.4). When we run these images with a lower window size (for example 160,160,160) there is no problem and it works quite well; however, when we increase the window_size to achieve higher-resolution outputs we get an error: Cannot serialize protocol buffer of type tensorflow.GraphDef as the serialized size (3459900923 bytes) would be larger than the limit (2147483647 bytes).
This is due to a limit in protobuf, and because Niftynet / Tensorflow have decided it should be int32, which gives a maximum value of 2^31 - 1 = 2147483647. At the same time I have heard that protobuf should really be able to cope with uint64, which would be able to handle a much larger number. Do you know if this can be manipulated in Tensorflow/Niftynet?
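For reference, the limit in that error message is exactly the maximum value of a signed 32-bit integer; a quick check, using only the numbers from the error text above, makes this explicit:

# The protobuf serialization limit quoted in the error is the maximum value of
# a signed 32-bit integer (2 GiB minus one byte).
int32_limit = 2 ** 31 - 1          # 2147483647, the limit in the error message
graphdef_size = 3459900923         # serialized GraphDef size from the error
print(graphdef_size > int32_limit)   # True
print(graphdef_size / int32_limit)   # ~1.61, i.e. about 61% over the limit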

What does "max_batch_size" mean in tensorflow-serving batching_config.txt?

I'm using tensorflow-serving on GPUs with --enable-batching=true.
However, I'm a little confused with max_batch_size in batching_config.txt.
My client sends an input tensor with tensor shape [-1, 1000] in a single gRPC request, where dim0 ranges over (0, 200]. I set max_batch_size = 100 and receive errors such as:
"gRPC call return code: 3:Task size 158 is larger than maximum batch size 100"
"gRPC call return code: 3:Task size 162 is larger than maximum batch size 100"
It looks like max_batch_size limits dim0 of a single request, but since tensorflow batches multiple requests into one batch, I thought it meant the total number of requests.
Here is a direct description from the docs.
max_batch_size: The maximum size of any batch. This parameter governs
the throughput/latency tradeoff, and also avoids having batches that
are so large they exceed some resource constraint (e.g. GPU memory to
hold a batch's data).
In ML, the first dimension usually represents the batch. So based on my understanding, tensorflow-serving treats the value of the first dimension as the batch size and raises this error whenever it is larger than the allowed value. You can verify this by issuing a few requests where you manually keep the first dimension below 100; I expect that to remove the error.
After that you can modify your client to send its inputs in appropriately sized pieces, as sketched below.
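Here is a minimal sketch of that client-side workaround, assuming a NumPy input and a hypothetical send_predict_request() helper that wraps your existing gRPC PredictRequest; the helper name and the chunk limit are assumptions for illustration, not part of tensorflow-serving's API:

import numpy as np

MAX_BATCH_SIZE = 100  # keep chunks at or below max_batch_size in batching_config.txt

def predict_in_chunks(inputs, send_predict_request):
    """Split dim0 into server-acceptable chunks and concatenate the results.

    send_predict_request is a hypothetical callable that sends one gRPC
    request for a chunk of shape [<= MAX_BATCH_SIZE, 1000] and returns the
    corresponding output array.
    """
    outputs = []
    for start in range(0, inputs.shape[0], MAX_BATCH_SIZE):
        chunk = inputs[start:start + MAX_BATCH_SIZE]
        outputs.append(send_predict_request(chunk))
    return np.concatenate(outputs, axis=0)

# Example: a request with dim0 = 158 goes out as chunks of 100 and 58, each of
# which stays within the server's max_batch_size. The identity lambda stands
# in for the real gRPC call.
dummy = np.zeros((158, 1000), dtype=np.float32)
print(predict_in_chunks(dummy, lambda chunk: chunk).shape)  # (158, 1000)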

Fatal Error Unable to allocate shared memory segment of 134217728 bytes: mmap: Cannot allocate memory (12)

Good morning, this is what I get in the Apache error log:
Fatal Error Unable to allocate shared memory segment of 134217728 bytes: mmap: Cannot allocate memory (12)
This is my ipcs -lm:
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 131072
max total shared memory (kbytes) = 536870912
min seg size (bytes) = 1
This is cat /etc/sysctl.conf
# Controls the default maxmimum size of a mesage queue
# kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
# kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
# kernel.shmmax = 200000000
# Controls the maximum number of shared memory segments, in pages
# kernel.shmall = 50000
#
I've set ulimit to unlimited, and I've tried everything I could find on the internet.
Can you please tell me what's wrong?
First of all, please consider removing the # sign in front of
kernel.shmmax = 200000000
kernel.shmall = 50000
like this:
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 200000000
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 50000
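After uncommenting them, reload the settings with sysctl -p (or reboot) so the new limits take effect. If you then want to double-check the limits actually in force against the 134217728-byte (128 MB) segment Apache is requesting, a small sketch like this can help; it only reads the standard /proc/sys entries that the settings above control, and the byte count comes from the error message:

import os

requested = 134217728  # bytes, from the Apache error message

with open("/proc/sys/kernel/shmmax") as f:
    shmmax = int(f.read())        # maximum size of a single segment, in bytes
with open("/proc/sys/kernel/shmall") as f:
    shmall = int(f.read())        # total shared memory allowed, in pages

page_size = os.sysconf("SC_PAGE_SIZE")
print("single-segment limit OK:", requested <= shmmax)
print("total shared memory OK:", requested <= shmall * page_size)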

Does Neo4j calculate JVM heap on Ubuntu?

In the neo4j-wrapper.conf file I see this:
# Java Heap Size: by default the Java heap size is dynamically
# calculated based on available system resources.
# Uncomment these lines to set specific initial and maximum
# heap size in MB.
#wrapper.java.initmemory=512
#wrapper.java.maxmemory=512
Does that mean that I should not worry about -Xms and -Xmx?
I've read elsewhere that -XX:ParallelGCThreads=4 -XX:+UseNUMA -XX:+UseConcMarkSweepGC would be good.
Should I add that on my Intel® Core™ i7-4770 Quad-Core Haswell 32 GB DDR3 RAM 2 x 240 GB 6 Gb/s SSD (Software-RAID 1) machine?
I would still configure it manually.
Set both wrapper.java.initmemory and wrapper.java.maxmemory to 12 GB and use the remaining 16 GB for memory mapping in neo4j.properties. Try to match it to your store file sizes; for example:
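With the sizing suggested above, the two lines quoted from neo4j-wrapper.conf earlier would be uncommented and set to 12 GB expressed in MB (the 12288 figure is just this answer's suggestion for a 32 GB machine, not a Neo4j default):
wrapper.java.initmemory=12288
wrapper.java.maxmemory=12288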

File systems and the boot parameter block:

I read a tutorial about writing a bootloader. The author gave this as an example of a boot parameter block:
bootsector:
iOEM: .ascii "DevOS " # OEM String
iSectSize: .word 0x200 # bytes per sector
iClustSize: .byte 1 # sectors per cluster
iResSect: .word 1 # #of reserved sectors
iFatCnt: .byte 2 # #of FAT copies
iRootSize: .word 224 # size of root directory
iTotalSect: .word 2880 # total # of sectors if over 32 MB
iMedia: .byte 0xF0 # media Descriptor
iFatSize: .word 9 # size of each FAT
iTrackSect: .word 9 # sectors per track
iHeadCnt: .word 2 # number of read-write heads
iHiddenSect: .int 0 # number of hidden sectors
iSect32: .int 0 # # sectors for over 32 MB
iBootDrive: .byte 0 # holds drive that the boot sector came from
iReserved: .byte 0 # reserved, empty
iBootSign: .byte 0x29 # extended boot sector signature
iVolID: .ascii "seri" # disk serial
acVolumeLabel: .ascii "MYVOLUME " # volume label
acFSType: .ascii "FAT16 " # file system type
If I am using a FAT32 file system, can I just change the last part (acFSType: .ascii "FAT16 ") and use this boot parameter block? If not, where can I get a boot parameter block for FAT32?
I asked Mike Saunders (the author of MikeOS) in an email, and his answer was no: I can't use this table for FAT32 just by changing that acFSType: .ascii "FAT16 " part. To get a boot parameter block for the FAT32 file system, he sent me this link to the Microsoft website.
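For orientation, here is a rough sketch of the FAT32 BPB/EBPB layout that the Microsoft specification describes, expressed with Python's struct module so the field order and sizes are explicit. The concrete values are illustrative placeholders, not taken from the specification or from any real volume:

import struct

# Illustrative FAT32 BPB / extended BPB layout (offsets relative to the start
# of the boot sector), following Microsoft's FAT32 specification. All values
# below are placeholders; a real volume's values depend on its geometry.
fat32_bpb = struct.pack(
    "<3s8s"      # jump instruction, OEM name
    "HBH"        # bytes/sector, sectors/cluster, reserved sectors
    "BHH"        # number of FATs, root entries (0 on FAT32), total sectors 16 (0)
    "BH"         # media descriptor, FAT size 16 (0 on FAT32)
    "HHII"       # sectors/track, heads, hidden sectors, total sectors 32
    "IHHI"       # FAT size 32, ext flags, FS version, root directory cluster
    "HH12s"      # FSInfo sector, backup boot sector, reserved
    "BBB"        # drive number, reserved, extended boot signature (0x29)
    "I11s8s",    # volume serial, volume label, FS type string
    b"\xeb\x58\x90", b"DevOS   ",
    512, 8, 32,           # 512 bytes/sector, 8 sectors/cluster, 32 reserved sectors
    2, 0, 0,              # 2 FATs; root entries and 16-bit total sectors are 0
    0xF8, 0,              # fixed-disk media descriptor; 16-bit FAT size is 0
    63, 255, 0, 1048576,  # geometry and a 512 MiB total sector count (placeholder)
    4096, 0, 0, 2,        # 32-bit FAT size (placeholder), flags, version, root cluster 2
    1, 6, b"\x00" * 12,   # FSInfo at sector 1, backup boot sector at 6, reserved
    0x80, 0, 0x29,        # BIOS drive 0x80, reserved, boot signature
    0x12345678, b"MYVOLUME   ", b"FAT32   ",
)

# The FAT32-specific fields (32-bit FAT size, root cluster, FSInfo, backup boot
# sector) are inserted before the drive number, so the FAT16 offsets above
# cannot be reused by only changing the "FAT16" string at the end.
assert len(fat32_bpb) == 90   # 3 + 8 + 79 bytes of BPB/EBPB fields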