Error 500 (Internal Server Error) when posting long form content - SQL

I created a long form with multiple fields that posts the typed data to a MariaDB database.
The column types are all set to TEXT, as the fields will contain long text.
The max_allowed_packet = 1073741824
The net_buffer_length = 1048576
The PHP post_max_size = 500M
The memory_limit = 550M
All of this seems sufficient for posting very long text to the database.
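For what it's worth, the effective server-side values can be double-checked from a SQL client as well (this is just a sanity check, not a fix; SHOW VARIABLES works on both MySQL and MariaDB):
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'net_buffer_length';
-- and the exact server version, which turns out to matter here:
SELECT VERSION();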
But apparently not: it gives me a 500 error as soon as the posted text exceeds 99,100 characters :/ and works fine when I keep it under that limit.
What am I doing wrong?
The error log shows: Code:500 Message: POST /login/resident/updateresident/7 HTTP/1.1
Thank you in advance!

Thanks for the reply!
Actually, none of that worked. I went through all the PHP and Apache configuration files that control the upload and POST size limits.
A brilliant guy sent me this link:
https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-20.html
It was a MySQL 5.6 bug!
I just upgraded to 5.7 and poof, it worked!
I lost three days of my life because of this bug!


Snowflake COPY INTO from JSON - ON_ERROR = CONTINUE - Weird Issue

I am trying to load a JSON file from a staging area (S3) into a stage table using the COPY INTO command.
Table:
create or replace TABLE stage_tableA (
RAW_JSON VARIANT NOT NULL
);
Copy Command:
copy into stage_tableA from @stgS3/filename_45.gz file_format = (format_name = 'file_json')
I got the error below when executing the above (sample provided):
SQL Error [100069] [22P02]: Error parsing JSON: document is too large, max size 16777216 bytes. If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option. For more information on loading options, please run 'info loading_data' in a SQL client.
When I set ON_ERROR=CONTINUE, records were partially loaded, i.e. only up to the record that exceeds the maximum size; no records after the offending record were loaded.
Isn't ON_ERROR=CONTINUE supposed to skip only the record that exceeds the maximum size and load the records before and after it?
Yes, ON_ERROR=CONTINUE skips the offending line and continues to load the rest of the file.
To help us provide more insight, can you answer the following:
How many records are in your file?
How many got loaded?
At what line was the error first encountered?
You can find this information using the COPY_HISTORY() table function.
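For example, a minimal COPY_HISTORY() lookup might look like this (the table name comes from the question; the one-hour window and the selected columns are just illustrative):
select file_name, status, row_count, row_parsed, first_error_message, first_error_line_number
from table(information_schema.copy_history(
    table_name => 'STAGE_TABLEA',
    start_time => dateadd(hours, -1, current_timestamp())
));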
Try setting the option strip_outer_array = true on the file format and attempt the load again.
The considerations for loading large semi-structured data are documented in the article below:
https://docs.snowflake.com/en/user-guide/semistructured-considerations.html
I partially agree with Chris. The ON_ERROR=CONTINUE option only helps if there is in fact more than one JSON object in the file. If it is one massive object, then with ON_ERROR=CONTINUE you would simply get neither an error nor a loaded record.
If you know your JSON payload is smaller than 16 MB, then definitely try strip_outer_array = true. Also, if your JSON has a lot of nulls ("NULL") as values, use STRIP_NULL_VALUES = TRUE, as this will slim down your payload as well. Hope that helps.
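As a rough sketch of how those options could be combined (the format and stage names are taken from the question; everything else is illustrative, so adjust to your setup):
create or replace file format file_json
  type = 'JSON'
  strip_outer_array = true
  strip_null_values = true;

copy into stage_tableA
  from @stgS3/filename_45.gz
  file_format = (format_name = 'file_json')
  on_error = 'CONTINUE';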

X-Cart - SQL error notification (Error code : 1030)

I am working on an X-Cart website for my company. Right now, I keep getting error messages from my website http://mothersenvogue.com.kh/ like the one below:
[24-May-2015 08:50:51] (shop: 24-May-2015 15:50:51) SQL error:
Site : https://mothersenvogue.com.kh
Remote IP : 176.9.29.209
Logged as :
SQL query : SHOW FIELDS FROM xcart_session_history
Error code : 1030
Description : Got error 28 from storage engine
Request URI: /secure_login.php?xid=025530538a738ddc86617a9aa81bc990
Backtrace:
/home/www/mothersenvogue.com.kh/include/func/func.db.php:189
/home/www/mothersenvogue.com.kh/include/func/func.db.php:115
/home/www/mothersenvogue.com.kh/include/func/func.db.php:384
/home/www/mothersenvogue.com.kh/include/func/func.db.php:630
/home/www/mothersenvogue.com.kh/include/func/func.db.php:458
/home/www/mothersenvogue.com.kh/include/sessions.php:161
/home/www/mothersenvogue.com.kh/init.php:524
/home/www/mothersenvogue.com.kh/preauth.php:51
/home/www/mothersenvogue.com.kh/auth.php:45
/home/www/mothersenvogue.com.kh/secure_login.php:37
-------------------------------------------------
Many of the error messages come from func.db.php, init.php, preauth.php, and auth.php, all at the same line numbers and on the same SQL "SHOW" query.
I checked all the above files at the given line numbers but could not find anything wrong.
Please kindly advise me what is wrong. Is there something wrong inside these files? I get many error messages emailed to me with content similar to the above.
I was referred here from my previous question on the X-Cart forum; here is my question there:
https://bt.x-cart.com/view.php?id=44717
Many thanks.
In most cases the file storage (the drive where your files are located on the hosting server, and where you checked the free space) is physically located on a different virtual or physical server/drive. Most hosting companies use optimized, dedicated servers for MySQL.
Thus you see enough space in your account, but MySQL still reports that there is no space left on the drive where the MySQL server is actually running.
So the best approach is to contact the hosting provider and find out the disk-space situation on the machine where MySQL itself runs.
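Error 28 is the operating system's "No space left on device" error (ENOSPC), so it is the disk the MySQL server itself writes to that has filled up. If you have SQL access, you can at least see which directories that server is using, for example:
SHOW VARIABLES LIKE 'datadir';
SHOW VARIABLES LIKE 'tmpdir';
-- the drives holding these paths on the database host are the ones to ask the hosting provider about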

QuickFIX/n - Tag Appears More Than Once Rejection

I'm having an issue with QuickFIX/n and I'm hoping someone with more experience working with it can shed some light on it. For some reason, messages are getting rejected by the QuickFIX engine because of repeating tags. I expect repeating tags, so I set UseDataDictionary=Y in my config file, but messages are still getting rejected. Has anyone experienced a similar issue?
The message I'm receiving looks like:
8=FIXT.1.1 9=421 35=AE 34=8 1128=8 49=XXX 56=YYY 52=20130501-15:45:53 552=1 54=2 37=130501-5 11=NOREF 826=0 78=1 79=default 80=1000000.00 5967=12167800.00 453=4 448=ITXT 452=3 447=D 448=TEST 452=1 447=D 448=LMEB 452=16 447=D 448=FRTB 452=11 447=D 571=6718487 150=F 32=1000000.00 15=USD 1056=12167800.00 31=12.1678 194=12.1678 195=0 64=20130503 63=0 60=20130501-00:00:00 75=20130501 1057=Y 460=4 167=FOR 65=SP 55=USD/MXN 10=203
8=FIXT.1.1 9=124 35=3 34=8 49=XXX 52=20130501-15:45:54.209 56=YYY 45=8 58=Tag appears more than once 371=448 372=AE 373=13 10=210
My config file looks like this:
[DEFAULT]
ConnectionType=initiator
HeartBtInt=30
ReconnectInterval=10
SocketReuseAddress=Y
FileStorePath=D:\Store
FileLogPath=D:\Log
[SESSION]
BeginString=FIXT.1.1
SenderCompID=XXX
TargetCompID=YYY
DefaultApplVerId = FIX.5.0SP1
UseDataDictionary=Y
AppDataDictionary=D:\Interface\FIX50SP1.xml
StartDay=sunday
StartTime=20:55:00
EndTime=06:05:00
EndDay=saturday
SocketConnectHost=1.1.1.1
SocketConnectPort=8443
Any help would be greatly appreciated! Thank you.
Often this happens because there is a field in a repeating group that is not specified in the DataDictionary. The parser sees that field and assumes the repeating group has ended; it then continues parsing fields as if they are not part of a group, and if it sees a duplicate field in this context it reports an error. In the rejected message above, for example, the 453=4 party group repeats tags 448/452/447 four times and the reject references tag 448 (371=448), so one of those party-group tags is probably missing from the group definition for message type AE in your dictionary.
You can clone and modify the FIX data dictionary (D:\Interface\FIX50SP1.xml) to fit your needs if you need to process such "invalid" messages, or you can disable message validation.

BigQuery: "Unexpected. Please try again" when loading a 53 GB CSV / 1.4 GB gzip

I was trying to load 1.4 GB of gzipped data into my BigQuery table and I consistently get the error "Unexpected. Please try again".
job_7f1aa8d29ae641459c82243530eb1c65
The file structure I was trying to load is: Row ID,Order Priority,Discount,Unit Price,Shipping Cost,Customer ID,Customer Name,Ship Mode,Product Category,Product Sub-Category,Product Base Margin,Region,State or Province,City,Postal Code,Order Date,Ship Date,Profit,Quantity ordered new,Sales,Order ID
The error is not clear on what's going wrong.
Has anyone else encountered this error?
Thanks.
It looks like your job ran out of time: a 53 GB CSV file is a lot to process in one thread. Best practice is to either split your data into multiple chunks or upload uncompressed data, which can be processed in parallel.
I'm in the process of raising the allowed time somewhat, and we'll work on improving the error message when this happens.

Rails 3.2.2 log files unordered, requests intertwined

I recall getting log files that were nicely ordered, so that you could follow one request, then the next, and so on.
Now the log files are, as my 4-year-old says, "all scroggled up", meaning they are no longer separate, distinct chunks of text. Log lines from two requests get intertwined/mixed up.
For instance:
Started GET /foobar
...
Completed 200 OK in 2ms (Views: 0.4ms | ActiveRecord: 0.8ms)
Patient Load (wait, that's from another request that has nothing to do with foobar!)
[ blank space ]
Something else
This is maddening, because I can't tell what's happening within one single request.
This is running on Passenger.
I tried to search for an answer but couldn't find any good info. I'm not sure whether you should fix this in the server or in the Rails code.
If you want more info about the issue, here is the commit that removed the old way of logging: https://github.com/rails/rails/commit/04ef93dae6d9cec616973c1110a33894ad4ba6ed
If you value production log readability over everything else, you can use the
PassengerMaxInstancesPerApp 1
configuration, though it might cause some scaling issues. Alternatively, you could put something like this in application.rb:
# config/application.rb: one log file per server process (the PID is part of the name)
process_log_filename = Rails.root + "log/#{Rails.env}-#{Process.pid}.log"
log_file = File.open(process_log_filename, 'a')
Rails.logger = ActiveSupport::BufferedLogger.new(log_file)
Yep! They have made some changes to ActiveSupport::BufferedLogger so it no longer waits until the request has ended to flush the logs:
http://news.ycombinator.com/item?id=4483390
https://github.com/rails/rails/commit/04ef93dae6d9cec616973c1110a33894ad4ba6ed
But they have added ActiveSupport::TaggedLogging, which is quite handy: you can stamp every log line with any kind of mark you want.
In your case it could be good to stamp the logs with the request UUID, like this:
# config/application.rb
config.log_tags = [:uuid]
Then, even if the logs are interleaved, you can still tell which lines correspond to the request you are following.
You can do more fun things with this feature to help you in studying your logs:
How to log user_name in Rails?
http://zogovic.com/post/21138929607/running-time-in-rails-logs
Well, for me the TaggedLogging solution is a no-go. I can live with some logs getting lost if the server crashes badly, but I want my logs to be perfectly ordered. So, following advice from the issue comments, I'm applying this to my app:
# lib/sequential_logs.rb
module ActiveSupport
  class BufferedLogger
    # Flush the underlying IO explicitly; with sync disabled in the
    # initializer below, buffered lines are written out together when flush is called.
    def flush
      @log_dest.flush
    end

    def respond_to?(method, include_private = false)
      super
    end
  end
end
# config/initializers/sequential_logs.rb
require 'sequential_logs.rb'
Rails.logger.instance_variable_get(:@logger).instance_variable_get(:@log_dest).sync = false
As far as I can tell this hasn't affected my app; it is still running and now my logs make sense again.
They should add some quasi-random request ID and write it on every line belonging to a single request. That way you wouldn't get confused.
I haven't used it, but I believe Lumberjack's unit_of_work method may be what you're looking for. You call:
Lumberjack.unit_of_work do
  yield
end
And all logging done either in that block or in the yielded block is tagged with a unique ID.