How do I tune batching performance using max_batch_size, batch_timeout_micros, num_batch_threads, and other parameters? I tried passing these parameters to the query client, but it doesn't work.
In the example below I have 100 images and want to batch them in groups of 10, yet the query runs over all 100 images at once instead of 10 at a time.
bazel-bin/tensorflow_serving/example/demo_batch --server=localhost:9000 --max_batch_size=10
Also, for batch scheduling, how do I make it run every 10 seconds after the first batch is done? Thanks.
I have run into the same problem.
I checked the TensorFlow Serving source code: these parameters live in a protobuf file defined in:
serving/tensorflow_serving/servables/tensorflow/session_bundle_config.proto
and there is an example config file in:
serving/tensorflow_serving/servables/tensorflow/testdata/batching_config.txt
If you follow the format of that batching_config.txt, the parameter configuration should work.
Hope it helps.
max_batch_size { value: 1024 }
batch_timeout_micros { value: 0 }
max_enqueued_batches { value: 1000000 }
num_batch_threads { value: 8 }
allowed_batch_sizes : 1
allowed_batch_sizes : 2
allowed_batch_sizes : 8
allowed_batch_sizes : 32
allowed_batch_sizes : 128
allowed_batch_sizes : 256
allowed_batch_sizes : 512
allowed_batch_sizes : 1024
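Note that these are server-side settings, not flags of the example client. If you serve with the standard tensorflow_model_server binary, you can point it at such a file roughly like this (a sketch; the model name and paths are placeholders, and flag availability may vary with your TensorFlow Serving version):

tensorflow_model_server \
    --port=9000 \
    --model_name=my_model \
    --model_base_path=/models/my_model \
    --enable_batching=true \
    --batching_parameters_file=/path/to/batching_config.txt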
I'm new to OptaPlanner and trying to implement a bin packing solution that packs e-commerce order items into carton containers; we have different container sizes to hold all the items.
I'm following the cloud balancing example to implement this bin packing: https://www.optaplanner.org/docs/optaplanner/latest/use-cases-and-examples/cloud-balancing/cloud-balancing.html
When I first run it, the result does not seem to be an optimized solution, and I'm not sure what is wrong in the code.
public void run() throws IOException {
    SolverFactory<CartonizationSolution> solverFactory = SolverFactory.create(new SolverConfig()
            .withSolutionClass(CartonizationSolution.class)
            .withEntityClasses(OrderItem.class)
            .withConstraintProviderClass(CartonizationConstraintProvider.class)
            .withTerminationConfig(new TerminationConfig().withUnimprovedSecondsSpentLimit(5L)));
    Solver<CartonizationSolution> solver = solverFactory.buildSolver();
    CartonizationSolution solution = load();
    CartonizationSolution solvedSolution = solver.solve(solution);
    ScoreManager<CartonizationSolution, HardSoftScore> scoreManager = ScoreManager.create(solverFactory);
    ScoreExplanation<CartonizationSolution, HardSoftScore> scoreExplanation = scoreManager.explainScore(solution);
    System.out.println(scoreManager.getSummary(solution));
    System.out.println("Planning items: " + solution.getOrderItems().size());
    System.out.println("Planning cartons: " + solution.getCartonRange().size());
    System.out.println("\nSolved CartonizationSolution:\n"
            + toDisplayString(solvedSolution));
}
Total Container be grouped: 4
Type: Small -> 4
CartonContainer#Small#3: 8 items
Volume Usage 97.005356% 13037520/13440000
Weight Usage 34.233334% 5135/15000
CartonContainer#Small#1: 10 items
Volume Usage 99.417336% 13361690/13440000
Weight Usage 24.633333% 3695/15000
CartonContainer#Small#4: 11 items
Volume Usage 75.845314% 10193610/13440000
Weight Usage 27.333334% 4100/15000
CartonContainer#Small#2: 12 items
Volume Usage 99.58103% 13383690/13440000
Weight Usage 91.64% 13746/15000
Total Volum: 53760000
public class CartonizationConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory constraintFactory) {
        return new Constraint[]{
                requiredWeightTotal(constraintFactory),
                requiredVolumeTotal(constraintFactory),
                computerCost(constraintFactory)
        };
    }

    Constraint requiredWeightTotal(ConstraintFactory constraintFactory) {
        return constraintFactory.forEach(OrderItem.class)
                .groupBy(OrderItem::getContainer, sum(OrderItem::getWeight))
                .filter((container, requiredWeight) -> requiredWeight > container.getMaxWeight())
                .penalize(HardSoftScore.ONE_HARD,
                        (container, requiredWeight) -> requiredWeight - container.getMaxWeight())
                .asConstraint("requiredWeightTotal");
    }

    Constraint requiredVolumeTotal(ConstraintFactory constraintFactory) {
        return constraintFactory.forEach(OrderItem.class)
                .groupBy(OrderItem::getContainer, sum(OrderItem::getVolume))
                .filter((container, requiredVolume) -> requiredVolume > container.getMaxVolume())
                .penalize(HardSoftScore.ONE_HARD,
                        (container, requiredVolume) -> requiredVolume - container.getMaxVolume())
                .asConstraint("requiredVolumeTotal");
    }

    Constraint computerCost(ConstraintFactory constraintFactory) {
        return constraintFactory.forEach(CartonContainer.class)
                .ifExists(OrderItem.class, equal(Function.identity(), OrderItem::getContainer))
                .penalize(HardSoftScore.ONE_SOFT, CartonContainer::getMaxVolume)
                .asConstraint("overallVolume");
    }
}
Running similar data with Google's OR-Tools, I get the result below.
Number of Items be planning: 41
Number of Carton be planning: 30
<generator object cartonize.<locals>.<genexpr> at 0x1057f3530>
Bin number #0 Small
Items packed: 9
Total weight: 42% 6.2909999999999995 / 15.0
Total volume: 99% 13320.85 / 13440.0
Bin number #15 Small
Items packed: 14
Total weight: 78% 11.686000000000002 / 15.0
Total volume: 99% 13269.66 / 13440.0
Bin number #25 Medium
Items packed: 18
Total weight: 58% 8.698999999999998 / 15.0
Total volume: 99% 23386.0 / 23520.0
Number of bins used: 3
Total volume 50400.0
Time = 1138 milliseconds
Shouldn't OptaPlanner get close to the OR-Tools result, given that the OR-Tools total volume is lower?
Please enable subpillar change and swap moves.
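A sketch of what that could look like in an XML solver config (element names taken from the OptaPlanner local search documentation; adjust them to your version):

<localSearch>
    <unionMoveSelector>
        <changeMoveSelector/>
        <swapMoveSelector/>
        <pillarChangeMoveSelector/>
        <pillarSwapMoveSelector/>
    </unionMoveSelector>
</localSearch>

You can load such a file with SolverFactory.createFromXmlResource(...) instead of building the SolverConfig purely in code.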
I am reading through perl6intro on lazy lists and it leaves me confused about certain things.
Take this example:
sub foo($x) {
    $x**2
}
my $alist = (1,2, &foo ... ^ * > 100);
will give me (1 2 4 16 256); it keeps squaring the previous number until the result exceeds 100. I want it to give me (1 4 9 16 25 ...), i.e. instead of squaring the previous value, advance a number x by 1 (or another given "step"), apply foo to it, and so on.
Is it possible to achieve this in this specific case?
Another question I have on lazy lists is the following:
In Haskell, there is a takeWhile function, does something similar exist in Perl6?
I want this to give me (1 4 9 16 25 .. )
The easiest way to get that sequence would be:
my @a = (1..*).map(* ** 2); # using a Whatever-expression
my @a = (1..*).map(&foo); # using your `foo` function
...or if you prefer to write it in a way that resembles a Haskell/Python list comprehension:
my @a = ($_ ** 2 for 1..*); # using an in-line expression
my @a = (foo $_ for 1..*); # using your `foo` function
While it is possible to go out of one's way to express this sequence via the ... operator (as Brad Gilbert's answer and raiph's answer demonstrate), it doesn't really make sense, as the purpose of that operator is to generate sequences where each element is derived from the previous element(s) using a consistent rule.
Use the best tool for each job:
If a sequence is easiest to express iteratively (e.g. Fibonacci sequence):
Use the ... operator.
If a sequence is easiest to express as a closed formula (e.g. sequence of squares):
Use map or for.
Here is how you could write a Perl 6 equivalent of Haskell's takeWhile.
sub take-while ( &condition, Iterable \sequence ){
    my \iterator = sequence.iterator;
    my \generator = gather loop {
        my \value = iterator.pull-one;
        last if value =:= IterationEnd or !condition(value);
        take value;
    }
    # should propagate the laziness of the sequence
    sequence.is-lazy
        ?? generator.lazy
        !! generator
}
I should probably also show an implementation of dropWhile.
sub drop-while ( &condition, Iterable \sequence ){
    my \iterator = sequence.iterator;
    GATHER: my \generator = gather {
        # drop initial values
        loop {
            my \value = iterator.pull-one;
            # if the iterator is out of values, stop everything
            last GATHER if value =:= IterationEnd;
            unless condition(value) {
                # need to take this so it doesn't get lost
                take value;
                # continue onto next loop
                last;
            }
        }
        # take everything else
        loop {
            my \value = iterator.pull-one;
            last if value =:= IterationEnd;
            take value
        }
    }
    sequence.is-lazy
        ?? generator.lazy
        !! generator
}
These are only just-get-it-working examples.
It could be argued that these are worth adding as methods to lists/iterables.
You could (but probably shouldn't) implement these with the sequence generator syntax.
sub take-while ( &condition, Iterable \sequence ){
    my \iterator = sequence.iterator;
    my \generator = { iterator.pull-one } …^ { !condition $_ }
    sequence.is-lazy ?? generator.lazy !! generator
}
sub drop-while ( &condition, Iterable \sequence ){
    my \end-condition = sequence.is-lazy ?? * !! { False };
    my \iterator = sequence.iterator;
    my $first;
    loop {
        $first := iterator.pull-one;
        last if $first =:= IterationEnd;
        last unless condition($first);
    }

    # I could have shoved the loop above into a do block
    # and placed it where 「$first」 is below
    $first, { iterator.pull-one } … end-condition
}
If they were added to Perl 6/Rakudo, they would likely be implemented with Iterator classes.
( I might just go and add them. )
A direct implementation of what you are asking for is something like:
do {
    my $x = 0;
    { (++$x)² } …^ * > 100
}
Which can be done with state variables:
{ ( ++(state $x = 0) )² } …^ * > 100
And a state variable that isn't used outside of declaring it doesn't need a name.
( A scalar variable starts out as an undefined Any, which becomes 0 in a numeric context )
{ (++( $ ))² } …^ * > 100
{ (++$)² } …^ * > 100
If you need to initialize the anonymous state variable, you can use the defined-or operator // combined with the equal meta-operator =.
{ (++( $ //= 5))² } …^ * > 100
In some simple cases you don't have to tell the sequence generator how to calculate the next values.
In such cases the ending condition can also be simplified.
say 1,2,4 ...^ 100
# (1 2 4 8 16 32 64)
The only other time you can safely simplify the ending condition is if you know that it will stop on the value.
say 1, { $_ * 2 } ... 64;
# (1 2 4 8 16 32 64)
say 1, { $_ * 2 } ... 3;
# (1 2 4 8 16 32 64 128 256 512 ...)
I want this to give me (1 4 9 16 25 .. )
my @alist = {(++$)²} ... Inf;
say @alist[^10]; # (1 4 9 16 25 36 49 64 81 100)
The {…} is an arbitrary block of code. It is invoked for each value of a sequence when used as the LHS of the ... sequence operator.
The (…)² evaluates to the square of the expression inside the parens. (I could have written (…) ** 2 to mean the same thing.)
The ++$ returns 1, 2, 3, 4, 5, 6 … by combining the pre-increment ++ (add one) with an anonymous state variable $.
In Haskell, there is a takeWhile function, does something similar exist in Perl6?
Replace the Inf from the above sequence with the desired end condition:
my @alist = {(++$)²} ... * > 70; # stop at the step that goes past 70
say @alist; # [1 4 9 16 25 36 49 64 81]
my @alist = {(++$)²} ...^ * > 70; # stop at the step before going past 70
say @alist; # [1 4 9 16 25 36 49 64]
Note how the ... and ...^ variants of the sequence operator provide the two variations on the stop condition. I note that in your original question you have ... ^ * > 100, not ...^ * > 100. Because the ^ in the former is detached from the ..., it has a different meaning. See Brad's comment.
I've been tasked with replacing C++ code with Go, and I'm quite new to the Go APIs. I am using gob to encode hundreds of key/value entries to disk pages, but the gob encoding carries too much bloat that I don't need.
package main
import (
    "bytes"
    "encoding/gob"
    "fmt"
)

type Entry struct {
    Key string
    Val string
}

func main() {
    var buf bytes.Buffer
    enc := gob.NewEncoder(&buf)
    e := Entry{"k1", "v1"}
    enc.Encode(e)
    fmt.Println(buf.Bytes())
}
This produces a lot of bloat that I don't need:
[35 255 129 3 1 1 5 69 110 116 114 121 1 255 130 0 1 2 1 3 75 101 121 1 12 0 1 3 86 97 108 1 12 0 0 0 11 255 130 1 2 107 49 1 2 118 49 0]
I want to serialize each string's len followed by the raw bytes like:
[0 0 0 2 107 49 0 0 0 2 118 49]
I am saving millions of entries, so the additional bloat in the encoding increases the file size by roughly 10x.
How can I serialize it to the latter without manual coding?
If you zip a file named a.txt containing the text "hello" (which is 5 characters), the resulting zip will be around 115 bytes. Does this mean the zip format is not efficient at compressing text files? Certainly not; there is an overhead. If the file contains "hello" a hundred times (500 bytes), zipping it results in a file of 120 bytes! 1x"hello" => 115 bytes, 100x"hello" => 120 bytes! We added 495 bytes, and yet the compressed size only increased by 5 bytes.
Something similar is happening with the encoding/gob package:
The implementation compiles a custom codec for each data type in the stream and is most efficient when a single Encoder is used to transmit a stream of values, amortizing the cost of compilation.
When you "first" serialize a value of a type, the definition of the type also has to be included / transmitted, so the decoder can properly interpret and decode the stream:
A stream of gobs is self-describing. Each data item in the stream is preceded by a specification of its type, expressed in terms of a small set of predefined types.
Let's return to your example:
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
e := Entry{"k1", "v1"}
enc.Encode(e)
fmt.Println(buf.Len())
It prints:
48
Now let's encode a few more of the same type:
enc.Encode(e)
fmt.Println(buf.Len())
enc.Encode(e)
fmt.Println(buf.Len())
Now the output is:
60
72
Try it on the Go Playground.
Analyzing the results:
Additional values of the same Entry type only cost 12 bytes, while the first is 48 bytes because the type definition is also included (which is ~26 bytes), but that is a one-time overhead.
So basically you transmit 2 strings, "k1" and "v1", which are 4 bytes, and the lengths of the strings also have to be included; using 4 bytes each (the size of int on 32-bit architectures) gives you the 12 bytes, which is the "minimum". (Yes, you could use a smaller type for the length, but that would have its limitations. A variable-length encoding would be a better choice for small numbers; see the encoding/binary package.)
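To illustrate that last remark, here is a minimal sketch of the varint helpers in encoding/binary (this is not something gob asks you to do yourself; it just shows that a small length can be stored in a single byte instead of four):

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    buf := make([]byte, binary.MaxVarintLen64)
    // A string length of 2 fits into a single byte with varint encoding.
    n := binary.PutUvarint(buf, 2)
    fmt.Println(buf[:n]) // [2]
}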
All in all, encoding/gob does a pretty good job for your needs. Don't get fooled by initial impressions.
If this 12 bytes for one Entry is too "much" for you, you can always wrap the stream into a compress/flate or compress/gzip writer to further reduce the size (in exchange for slower encoding/decoding and slightly higher memory requirement for the process).
Demonstration:
Let's test the following 5 solutions:
Using a "naked" output (no compression)
Using compress/flate to compress the output of encoding/gob
Using compress/zlib to compress the output of encoding/gob
Using compress/gzip to compress the output of encoding/gob
Using github.com/dsnet/compress/bzip2 to compress the output of encoding/gob
We will write a thousand entries, changing keys and values of each, being "k000", "v000", "k001", "v001" etc. This means the uncompressed size of an Entry is 4 byte + 4 byte + 4 byte + 4 byte = 16 bytes (2x4 bytes text, 2x4 byte lengths).
The code looks like this:
for _, name := range []string{"Naked", "flate", "zlib", "gzip", "bzip2"} {
    buf := &bytes.Buffer{}
    var out io.Writer
    switch name {
    case "Naked":
        out = buf
    case "flate":
        out, _ = flate.NewWriter(buf, flate.DefaultCompression)
    case "zlib":
        out, _ = zlib.NewWriterLevel(buf, zlib.DefaultCompression)
    case "gzip":
        out = gzip.NewWriter(buf)
    case "bzip2":
        out, _ = bzip2.NewWriter(buf, nil)
    }
    enc := gob.NewEncoder(out)
    e := Entry{}
    for i := 0; i < 1000; i++ {
        e.Key = fmt.Sprintf("k%3d", i)
        e.Val = fmt.Sprintf("v%3d", i)
        enc.Encode(e)
    }
    if c, ok := out.(io.Closer); ok {
        c.Close()
    }
    fmt.Printf("[%5s] Length: %5d, average: %5.2f / Entry\n",
        name, buf.Len(), float64(buf.Len())/1000)
}
Output:
[Naked] Length: 16036, average: 16.04 / Entry
[flate] Length: 4120, average: 4.12 / Entry
[ zlib] Length: 4126, average: 4.13 / Entry
[ gzip] Length: 4138, average: 4.14 / Entry
[bzip2] Length: 2042, average: 2.04 / Entry
Try it on the Go Playground.
As you can see, the "naked" output is 16.04 bytes/Entry, just a little over the calculated size (due to the tiny one-time overhead discussed above).
When you use flate, zlib or gzip to compress the output, you can reduce it to about 4.13 bytes/Entry, which is about 26% of the theoretical size; I'm sure that satisfies you. If not, you can reach for libraries providing higher-efficiency compression such as bzip2, which in the above example resulted in 2.04 bytes/Entry, 12.7% of the theoretical size!
(Note that with "real-life" data the compression ratio would probably be a lot higher, as the keys and values I used in the test are very similar and thus compress extremely well; still, the ratio should be around 50% with real-life data.)
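For completeness, reading such a compressed stream back is just the mirror image. A minimal sketch assuming the flate variant (error handling reduced to breaking out of the loop at the end of the stream):

package main

import (
    "bytes"
    "compress/flate"
    "encoding/gob"
    "fmt"
)

type Entry struct {
    Key string
    Val string
}

func main() {
    // Write a small flate-compressed gob stream, as in the benchmark above.
    var buf bytes.Buffer
    w, _ := flate.NewWriter(&buf, flate.DefaultCompression)
    enc := gob.NewEncoder(w)
    enc.Encode(Entry{"k1", "v1"})
    enc.Encode(Entry{"k2", "v2"})
    w.Close()

    // Read it back: wrap the data in a flate reader and decode until EOF.
    r := flate.NewReader(bytes.NewReader(buf.Bytes()))
    dec := gob.NewDecoder(r)
    for {
        var e Entry
        if err := dec.Decode(&e); err != nil {
            break // io.EOF once the stream is exhausted
        }
        fmt.Println(e.Key, e.Val)
    }
    r.Close()
}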
Use protobuf to efficiently encode your data.
https://github.com/golang/protobuf
Your main would look like this:
package main
import (
    "fmt"
    "log"

    "github.com/golang/protobuf/proto"
)

func main() {
    e := &Entry{
        Key: proto.String("k1"),
        Val: proto.String("v1"),
    }
    data, err := proto.Marshal(e)
    if err != nil {
        log.Fatal("marshaling error: ", err)
    }
    fmt.Println(data)
}
You create a file, example.proto like this:
package main;
message Entry {
    required string Key = 1;
    required string Val = 2;
}
You generate the go code from the proto file by running:
$ protoc --go_out=. *.proto
You can examine the generated file, if you wish.
You can run and see the results output:
$ go run *.go
[10 2 107 49 18 2 118 49]
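Decoding is the mirror operation; a small sketch that could be appended to the main above (it relies on the Get accessors that protoc-gen-go generates for the Entry message):

    e2 := &Entry{}
    if err := proto.Unmarshal(data, e2); err != nil {
        log.Fatal("unmarshaling error: ", err)
    }
    // Prints: k1 v1
    fmt.Println(e2.GetKey(), e2.GetVal())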
"Manual coding", you're so afraid of, is trivially done in Go using the standard encoding/binary package.
You appear to store string length values as 32-bit integers in big-endian format, so you can just go on and do just that in Go:
package main
import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
)

func encode(w io.Writer, s string) (n int, err error) {
    var hdr [4]byte
    binary.BigEndian.PutUint32(hdr[:], uint32(len(s)))
    n, err = w.Write(hdr[:])
    if err != nil {
        return
    }
    n2, err := io.WriteString(w, s)
    n += n2
    return
}

func main() {
    var buf bytes.Buffer
    for _, s := range []string{
        "ab",
        "cd",
        "de",
    } {
        _, err := encode(&buf, s)
        if err != nil {
            panic(err)
        }
    }
    fmt.Printf("%v\n", buf.Bytes())
}
Playground link.
Note that in this example I'm writing to a byte buffer, but that's for demonstration purposes only—since encode() writes to an io.Writer, you can pass it an opened file, a network socket and anything else implementing that interface.
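A matching decode function is about as small; a sketch mirroring the length-prefix format written by encode above:

func decode(r io.Reader) (s string, err error) {
    // Read the 4-byte big-endian length prefix.
    var hdr [4]byte
    if _, err = io.ReadFull(r, hdr[:]); err != nil {
        return
    }
    // Read exactly that many bytes of string payload.
    data := make([]byte, binary.BigEndian.Uint32(hdr[:]))
    if _, err = io.ReadFull(r, data); err != nil {
        return
    }
    return string(data), nil
}

Calling it repeatedly on the buffer from main above yields "ab", "cd", "de" and then an io.EOF error.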
I'm using the JWNL library for similarity search. The formula needs the hyponym count. How do I get the number of hyponyms and the depth of a synset in WordNet using JWNL? Thanks.
I have tried the code below, but when the program runs I get a Java heap space error, even though my Java heap space is 2 GB.
// Method to count the number of hyponyms of a synset
public double getHypo(Synset synset) throws JWNLException {
    double hypo = PointerUtils.getInstance().getHyponymTree(synset).toList().size();
    return hypo;
}

// Method to count the depth of a synset from the root (root depth = 1)
public double getDeep(Synset synset) throws JWNLException {
    double deep = PointerUtils.getInstance().getHypernymTree(synset).toList().size() + 1;
    return deep;
}
EDIT:
My php.ini has 256MB memory set:
;;;;;;;;;;;;;;;;;;;
; Resource Limits ;
;;;;;;;;;;;;;;;;;;;
max_execution_time = 250 ; Maximum execution time of each script, in seconds
max_input_time = 120 ; Maximum amount of time each script may spend parsing request data
;max_input_nesting_level = 64 ; Maximum input variable nesting level
memory_limit = 256MB ; Maximum amount of memory a script may consume (256MB)
So I had a certain PHP script which was not very well written, and when I executed it PHP ran out of memory and my PC froze. Before running the script I had increased the memory limit in php.ini; I changed it back to the default value afterwards.
Now the problem is that it seems to have done something to my PHP installation. Every PHP script I execute now tells me it does not have enough memory, including scripts that worked before without an issue.
It seems like the one bad script I mentioned earlier is still running in the background somehow.
I have restarted PHP and Apache, I have restarted my PC, and I even went to sleep for 8 hours. First thing in the morning, all PHP scripts are still running out of memory. What the hell?
I am now getting errors like this everywhere (with the file in the error changing, of course), with every single PHP script, even the simplest ones:
Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 6144 bytes) in D:\data\o\WebLib\src\Db\Db.php on line 241
Fatal error (shutdown): Allowed memory size of 262144 bytes exhausted (tried to allocate 6144 bytes) in D:\data\o\WebLib\src\Db\Db.php on line 241
Ok here is the script (I have commented out the bad parts):
<?php
error_reporting(E_ALL);
define('BASE_PATH', dirname(__FILE__));
require_once(BASE_PATH.'/../WebLib/config/paths.php');
require_once(PATH_TO_LIB3D_SRC.'/PHPExcel/Classes/PHPExcel.php');
require_once(PATH_TO_LIB3D_SRC.'/PHPExcel/Classes/PHPExcel/Reader/IReadFilter.php');
///** Define a Read Filter class implementing PHPExcel_Reader_IReadFilter */
//class chunkReadFilter implements PHPExcel_Reader_IReadFilter {
// private $_startRow = 0;
// private $_endRow = 0;
// /** Set the list of rows that we want to read */
// public function setRows($startRow, $chunkSize)
// {
// $this->_startRow = $startRow;
// $this->_endRow = $startRow + $chunkSize;
// }
// public function readCell($column, $row, $worksheetName = '')
// {
// // Only read the heading row, and the rows that are configured in $this->_startRow and $this->_endRow
// if (($row == 1) || ($row >= $this->_startRow && $row < $this->_endRow)) {
// return true;
// }
// return false;
// }
//}
//
//function ReadXlsxTableIntoArray($theFilePath)
//{
// $arrayData =
// $arrayOriginalColumnNames =
// $arrayColumnNames = array();
//
// $inputFileType = 'Excel2007';
// /** Create a new Reader of the type defined in $inputFileType **/
// $objReader = PHPExcel_IOFactory::createReader($inputFileType);
// /** Define how many rows we want to read for each "chunk" **/
// $chunkSize = 10;
// /** Create a new Instance of our Read Filter **/
// $chunkFilter = new chunkReadFilter();
// /** Tell the Reader that we want to use the Read Filter that we've Instantiated **/
// $objReader->setReadFilter($chunkFilter);
// $objReader->setReadDataOnly(true);
// /** Loop to read our worksheet in "chunk size" blocks **/
// /** $startRow is set to 2 initially because we always read the headings in row #1 **/
// for ($startRow = 1; $startRow <= 65536; $startRow += $chunkSize) {
// /** Tell the Read Filter, the limits on which rows we want to read this iteration **/
// $chunkFilter->setRows($startRow,$chunkSize);
// /** Load only the rows that match our filter from $inputFileName to a PHPExcel Object **/
// $objPHPExcel = $objReader->load($theFilePath);
// // Do some processing here
//
// $rowIterator = $objPHPExcel->getActiveSheet()->getRowIterator();
// foreach($rowIterator as $row){
//
// $cellIterator = $row->getCellIterator();
// //$cellIterator->setIterateOnlyExistingCells(false); // Loop all cells, even if it is not set
// if(1 == $row->getRowIndex ()) {
// foreach ($cellIterator as $cell) {
// $value = $cell->getCalculatedValue();
// $arrayOriginalColumnNames[] = $value;
// // let's remove the diacritique
// $value = iconv('UTF-8', 'ISO-8859-1//TRANSLIT', $value);
// // and white spaces
// $valueExploded = explode(' ', $value);
// $value = '';
// // capitalize the first letter of each word
// foreach ($valueExploded as $word) {
// $value .= ucfirst($word);
// }
// $arrayColumnNames[] = $value;
// }
// continue;
// } else {
// $rowIndex = $row->getRowIndex();
// reset($arrayColumnNames);
// foreach ($cellIterator as $cell) {
// $arrayData[$rowIndex][current($arrayColumnNames)] = $cell->getCalculatedValue();
// next($arrayColumnNames);
// }
// }
//
// unset($cellIterator);
// }
//
// unset($rowIterator);
// }
//
// // Free up some of the memory
// $objPHPExcel->disconnectWorksheets();
// unset($objPHPExcel);
//
// return array($arrayOriginalColumnNames, $arrayColumnNames, $arrayData);
//}
//
//if (isset($_POST['uploadFile'])) {
// //list($tableOriginalColumnNames, $tableColumnNames, $tableData) = ReadXlsxTableIntoArray($_FILES['uploadedFile']['tmp_name']);
// //CreateXMLSchema($tableOriginalColumnNames, 'schema.xml');
// //echo GetReplaceDatabaseTableSQL('posta_prehlad_hp', $tableColumnNames, $tableData);
//}
Change the php.ini:
memory_limit = 256M
Note that you're not supposed to write MB or KB, but just M or K.
PHP expects the unit megabyte to be denoted by the single letter M. You've specified 256MB; notice the extra B.
Since PHP doesn't understand the unit MB, it falls back to the lowest known "named" unit: kilobyte (K).
Simply remove the extra B from your setting and PHP should properly read the value as 256 megabytes (256M).
Please see the following FAQ entry on data size units:
PHP: Using PHP - Manual
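A quick way to verify what PHP actually parsed is to ask it directly (a minimal check; run it through the same SAPI you are testing, e.g. via Apache rather than the CLI, since they can use different ini files):

<?php
// Show the raw memory_limit string and how much memory the script currently has allocated.
var_dump(ini_get('memory_limit'));  // should read "256M" once the ini is fixed
var_dump(memory_get_usage(true));   // bytes currently allocated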
You have a memory_limit of 256K. That's far too little in nearly all cases. The default value is 16M (since PHP 5.2).
Are you sure you set your memory size back correctly? The error shows that your max memory is 262144 bytes, and that is a quarter of an MB. That's really low!
Regarding your PHP settings: shouldn't that syntax be
memory_limit = 256M
I don't know whether it accepts both M and MB, but it might not.
Hey Richard, the change couldn't have taken effect, since PHP clearly states that you only have 256K set as the limit. Look through php.ini and all the other places; it could also be set in a .htaccess file on the vhost/host.
Have you restarted Apache after editing php.ini and increasing memory_limit to 256MB?