What is the maximum file upload size allowed in the post_max_size and upload_max_filesize configuration options (in PHP 5.3)?
According to the manual entry about post_max_size:
Note:
PHP allows shortcuts for byte values, including K (kilo), M (mega) and G (giga).
PHP will do the conversions automatically if you use any
of these. Be careful not to exceed the 32 bit signed integer limit (if
you're using 32bit versions) as it will cause your script to fail.
Your limit could be the 32-bit signed integer limit: ~2,147,483,647 bytes on a 32-bit build. See the PHP_INT_MAX constant to get the value for your system:
PHP_INT_MAX (integer)
The largest integer supported in this build of PHP. Usually int(2147483647). Available since PHP 4.4.0 and PHP 5.0.5
Related:
How to have 64 bit integer on PHP?
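To make the shorthand rule concrete, here is a minimal Python sketch (not PHP's actual parser, which is more lenient about trailing characters) that converts php.ini shorthand to bytes and flags values that would overflow a 32-bit signed integer:

```python
# Minimal sketch (not PHP's real, more lenient parser): convert php.ini
# shorthand like "8M" to bytes and flag 32-bit signed integer overflow.

def php_shorthand_to_bytes(value: str) -> int:
    multipliers = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    value = value.strip()
    suffix = value[-1].upper()
    if suffix in multipliers:
        return int(value[:-1]) * multipliers[suffix]
    return int(value)

INT32_MAX = 2 ** 31 - 1  # 2,147,483,647

for setting in ("8M", "128M", "2G"):
    size = php_shorthand_to_bytes(setting)
    print(setting, size, "overflows 32-bit" if size > INT32_MAX else "ok")
```

Note that "2G" already exceeds INT32_MAX by one byte, which is exactly the failure mode the manual warns about on 32-bit builds.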
There is no real limit set by PHP on post_max_size or upload_max_filesize. However, both values must be smaller than memory_limit (which you can also modify). In any case, use values much smaller than your available RAM: an attacker could otherwise send a very large upload that completely consumes your system's resources. For very large files, an FTP server is a better fit.
Related
Does H2 have a notion of a specific size limit for the BLOB data type? The documentation (https://h2database.com/html/datatypes.html#blob_type) states that you can optionally set a limit e.g. BLOB(10K), so does that mean that BLOB() is unlimited in size?
Similarly, the documentation lists TINYBLOB, MEDIUMBLOB etc. as acceptable keywords, but doesn't give any specific meaning for them. Are they simply aliases to BLOB for compatibility with other database dialects?
(I see that the BINARY type has a limit of 2 GB, which is what makes me think BLOB is unlimited, since no limit is specified for it.)
BINARY / VARBINARY values are limited by available memory, and they also have a hard limit slightly below 2 GB (the maximum array size in Java). Note that BINARY should only be used for values with a known fixed size. In H2 1.4.200, BINARY is an alias for VARBINARY, but in the not-yet-released next version they are distinct types.
BLOB values can be much larger. They aren't loaded into memory; they are streamed instead. There is some outdated information about limits in the documentation (https://h2database.com/html/advanced.html#limits_limitations), but that part was written for H2's old storage engine; H2 uses a different storage engine by default. Either way, both engines support large binary and character objects.
TINYBLOB, MEDIUMBLOB, etc. don't have any special meaning, they are for compatibility only. Don't use them.
I am using Aerospike 3.40 with the Python client. A bin containing a floating point value doesn't appear. Please help.
Floating point values are now supported natively as of Aerospike server 3.6.
The server does not natively support floats. It supports integers, strings, bytes, lists, and maps. Different clients handle the unsupported types in different ways. The PHP client, for example, will serialize the other types such as boolean and float and store them in a bytes field, then deserialize them on reads. The Python client will be doing that starting with the next release (>= 1.0.38).
However, this approach has the limitation of making it difficult for different clients (PHP and Python, for example) to read such serialized data, as it's not serialized using a common format.
One common way to get around this with floats is to turn them into integers. For example, if you have a bin called 'currency', you can multiply the float by 100, drop the fractional part, and store the result as an integer. On the way out you simply divide by 100.
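A minimal Python sketch of that fixed-point trick (the 'currency' bin name is just the example from above):

```python
# Sketch of the fixed-point workaround: store a currency float as an
# integer number of cents, and divide by 100 on the way out.

def to_cents(amount: float) -> int:
    # round() avoids float artifacts such as 19.99 * 100 == 1998.9999999999998
    return round(amount * 100)

def from_cents(cents: int) -> float:
    return cents / 100

stored = to_cents(19.99)           # 1999, an integer the server supports natively
print(stored, from_cents(stored))  # 1999 19.99
```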
A similar method is to store the integer part in one bin and the fractional digits in another, and recombine them on read. So 123.456789 gets stored as v_sig and v_mantissa:
(v_sig, v_mantissa) = str(123.456789).split('.')  # two strings: '123' and '456789'
and on read you would combine the two:
v = float(v_sig) + float("0." + v_mantissa)  # 123.456789
(This sketch assumes positive values; negative numbers and exponent notation need extra handling.)
FYI, floats are now supported natively as doubles on Aerospike server versions >= 3.6.0. Most clients, such as the Python and PHP ones, support casting floats to as_double.
A floating point number can be split into two parts, before and after the decimal point, stored in two bins, and recombined in application code.
However, creating more bins has a performance overhead in Aerospike, as a new malloc is used per bin.
If you don't need to read the data from languages other than Python, it is better to use a serialization mechanism and save the value in a single bin. That means only one bin per floating point number, and it also reduces the data size in Aerospike. Less data in Aerospike always helps speed in terms of network I/O, which is the main aim of caching.
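As one example of such a serialization mechanism (a sketch, not what any particular client does internally), Python's standard struct module can pack the float into 8 bytes of big-endian IEEE 754, a language-neutral format that other clients could also decode:

```python
# Sketch: pack a float into 8 bytes of big-endian IEEE 754 for storage
# in a single bytes bin, and unpack it again on read.
import struct

def encode_double(value: float) -> bytes:
    return struct.pack(">d", value)

def decode_double(raw: bytes) -> float:
    return struct.unpack(">d", raw)[0]

raw = encode_double(123.456789)      # always exactly 8 bytes
print(len(raw), decode_double(raw))  # 8 123.456789
```

Unlike language-specific pickling, this round-trips the double exactly and stays readable from PHP, Java, etc.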
Making an AMI and storing it in S3 using the ec2-bundle-vol/ec2-upload-bundle/ec2-register trifecta in AWS, I get 36 image chunks of 10 MB each. From a readability/testing standpoint I would much prefer something like four 100 MB images, or one 3.5 GB file.
I don't see an easy way to change this behavior without finding and reverse-engineering the Ruby script wrapped by ec2-bundle-vol.
Alternately, is there a good reason for three dozen small files?
Unfortunately, there's no way without modifying the Ruby script.
The chunk size is hardcoded in
$AWS_PATH/amitools/ec2/lib/ec2/amitools/bundle.rb line 14
CHUNK_SIZE = 10 * 1024 * 1024 # 10 MB in bytes.
Note that registering the bundle may not work if the chunks have a different size.
Using Redis, I am currently parameterizing the redis.conf for using virtual memory.
For context: I have 18 million keys (max 25 chars each), stored as hashes with 4 fields (max 256 chars each).
My server has 16 GB of RAM.
I wonder how to tune the vm-page-size (more than 64?) and vm-pages parameters.
Any ideas? Thanks.
You probably don't need to in this case - your usage is pretty close to standard. It's only when your values are large ( > ~4k iirc) that you can run into issues with insufficient contiguous space.
Also, with 16GB available there won't be much swapping happening, which makes the vm config a lot less important.
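A rough back-of-envelope check of that claim (assumption: a swapped hash costs roughly the sum of its field payloads; real encoding overhead is ignored):

```python
# Back-of-envelope check of the "< ~4k" claim above, using the numbers
# from the question (18M keys, hashes of 4 fields, max 256 chars each).
keys = 18_000_000
fields = 4
max_field_bytes = 256

max_value_bytes = fields * max_field_bytes   # 1024 bytes per hash, worst case
print(max_value_bytes)                       # 1024, well under the ~4k danger zone
print(round(keys * max_value_bytes / 2 ** 30, 1))  # ~17.2 GiB only if EVERY value hit the max
```

Even in the unrealistic worst case where every field is full, each value stays around 1 KB, so contiguous-space issues shouldn't arise.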
What's the best way to store a file size in bytes in database?
Considering that the size can be huge: MB, GB, TB...
I'm using bigint (max: 9,223,372,036,854,775,807), but is it the best way?
That's the type I would choose. It corresponds to the long type in C# (a 64-bit number), and it is the same type Windows uses to store file sizes.
A 64-bit integer is all you need.
If bigint has a maximum value of 9,223,372,036,854,775,807, that suggests a signed 64-bit integer, which is perfectly adequate.
From the description, it does not look like 32-bit integers will do what you need, so unless you actually need to support sizes larger than 9,223,372,036,854,775,807 bytes, bigint is the most efficient form you could choose.
If you needed larger values (I can't imagine why), you'd need to either store the size as a string, or find a big-number library that uses as many bytes as necessary to store the number (i.e., has no maximum size).
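For a sense of the headroom, a quick check of what the signed 64-bit maximum means for byte counts:

```python
# Headroom check: a signed 64-bit integer tops out just under 8 exbibytes,
# far beyond any realistic file size.
INT64_MAX = 2 ** 63 - 1
print(INT64_MAX)            # 9223372036854775807
print(INT64_MAX / 2 ** 60)  # 8.0, i.e. just under 8 EiB
```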