PHP parse error on first hit, then segmentation fault [closed]

I am trying to bring an old website back to life for demonstration purposes. I am stuck with a PHP parse error and I can't figure out what it's about!
Here is the error I get (apache log) the first time I hit the page:
[error] [client 127.0.0.1] PHP Parse error:
parse error, expecting `T_STRING' or `'('' in .../functions.php on line 4
(the line break is for readability only). I end up with a 500 error.
Here is the only line I get the second time I hit the page:
[notice] child pid 3734 exit signal Segmentation fault (11)
This time I end up with a 324: ERR_EMPTY_RESPONSE.
Here is the code in the functions.php file; please don't judge the code, it's very old ;).
<?php
// GoTo
function GoTo($page)
{
    global $FullPath;
    #header('Location:'.$FullPath.$page);
    echo "<script language='Javascript'>
    window.location='$page';
    </script>";
}
Do you see the parse error I am missing??
Why do I get a segfault the second time?

You're using a newer version of PHP than you were when the site first came into existence, and goto has (sadly) been a keyword since PHP 5.3. Rename your function (:

Do not use goto as a function name; the goto operator can be used to jump to another section in the program. Change the name of the function.
Also keep in mind that you need to exit the script after header('Location: xxx'); and make sure you don't have any output before that header.
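For example, a renamed version of the helper might look like this (a sketch; redirectTo is just an example name):
<?php
// "goto" has been a reserved word since PHP 5.3, so the function needs a new name.
// "redirectTo" is only an example name.
function redirectTo($page)
{
    global $FullPath;
    // Prefer a real HTTP redirect; this only works if nothing has been output yet.
    header('Location: ' . $FullPath . $page);
    exit; // stop the script so nothing runs after the redirect
}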


Determine actual errors from a load job

Using the Java SDK, I am creating a load job for just a single record with a fairly complicated schema. When monitoring the status of the load job, it takes a surprisingly long time (perhaps due to working out the schema), and then says:
11:21:06.975 [main] INFO xxx.GoogleBigQuery - Job status (21694ms) create_scans_1384744805079_172221126: DONE
11:24:50.618 [main] ERROR xxx.GoogleBigQuery - Job create_scans_1384744805079_172221126 caused error (invalid) with message
Too many errors encountered. Limit is: 0.
11:24:50.810 [main] ERROR xxx.GoogleBigQuery - {
"message" : "Too many errors encountered. Limit is: 0.",
"reason" : "invalid"
}
BTW - how do I tell the job that it can have more than zero errors using Java?
This load job does not appear in the list of recent jobs in the console, and as far as I can see, none of the Java objects contains any more details about the actual errors encountered. So how can I programmatically find out what is going wrong? All I can find is:
if (err != null) {
    log.error("Job {} caused error ({}) with message\n{}", jobID, err.getReason(), err.getMessage());
    try {
        log.error(err.toPrettyString());
    }
    ...
In general I am having a difficult time finding good documentation for some of these things and am working it out by trial and error and short snippets of code found on here and older groups. If there is a better source of information than the getting started guides, then I would appreciate any pointers to that information. The Javadoc does not really help and I cannot find any complete examples of loading, querying, testing for errors, cataloging errors and so on.
This job is submitted as a NEWLINE_DELIMITED_JSON record, supplied to the job via:
InputStream dummy = getClass().getResourceAsStream("/googlebigquery/xxx.record");
final InputStreamContent jsonIn = new InputStreamContent("application/octet-stream", dummy);
createTableJob = bigQuery.jobs().insert(projectId, loadJob, jsonIn).execute();
My authentication and so on seems to work correctly, as separate Java code to list the projects and the datasets in the project all works correctly. So I just need help working out what the actual error is: does it not like the schema (I have records nested within records, for instance), or does it think there is an error in the data I am submitting?
Thanks in advance for any help. The job number cited above is an actual failed load job if that helps any Google staffers who might read this.
It sounds like you have a couple of questions, so I'll try to address them all.
First, the way to get the status of a job that failed is to call jobs().get(projectId, jobId), which returns a Job object whose status has an errorResult object containing the error that caused the job to fail (e.g. "too many errors"). The errors list in the same status is a list of all of the errors on the job, which should tell you which lines hit errors.
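In the Java client, that lookup might look roughly like this (a sketch; logJobErrors is a hypothetical helper, and bigQuery and projectId are the objects from the question's setup):
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.ErrorProto;
import com.google.api.services.bigquery.model.Job;
import java.io.IOException;
import java.util.List;

// Fetch a job by id and log why it failed.
void logJobErrors(Bigquery bigQuery, String projectId, String jobId) throws IOException {
    Job job = bigQuery.jobs().get(projectId, jobId).execute();

    // The error that caused the job to fail, e.g. "Too many errors encountered."
    ErrorProto errorResult = job.getStatus().getErrorResult();
    if (errorResult != null) {
        System.out.println("Job failed: " + errorResult.getMessage());
    }

    // The individual record-level errors, which should point at the offending lines.
    List<ErrorProto> errors = job.getStatus().getErrors();
    if (errors != null) {
        for (ErrorProto e : errors) {
            System.out.println(e.getLocation() + ": " + e.getMessage());
        }
    }
}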
Note that if you have the job id, it may be easier to use bq to look up the job: run bq show -j <job_id> to get the job error information. If you add --format=prettyjson it will print out all of the information in the job.
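For example, with the job id from the question (flag placement may vary slightly by bq version):
bq --format=prettyjson show -j create_scans_1384744805079_172221126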
Another hint: consider supplying your own job id when you create the job. Then even if there is an error starting the job (e.g. the insert() call fails due to a network error), you can look up the job to see what actually happened.
To tell BigQuery that some errors are allowed during import, you can use the maxBadRecords setting in the load job. See https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/java/latest/com/google/api/services/bigquery/model/JobConfigurationLoad.html#getMaxBadRecords().
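For example (a one-line sketch; loadJob is the Job passed to jobs().insert() in the question's snippet):
// Allow up to 10 bad records before BigQuery fails the load job outright.
loadJob.getConfiguration().getLoad().setMaxBadRecords(10);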

Works on localhost but fails on Heroku [closed]

I am using delayed_job to run a task in the background.
I start the task via AJAX; the worker gets a UUID and writes the task's status and result to the cache.
Then another AJAX call polls every second to see if I got a result.
It works well on localhost, but when I upload to Heroku it does not.
I checked the logs and I can see that the worker can read the cache it has written, but when the main thread tries to access it, it's empty.
I am using the thin server, MemCachier and Dalli.
This is the code used to write to the cache:
def self.get_meta_info(link_url, job_uuid)
  begin
    #..........
    result = {
      title: "stuff here..."
    }
    #..........
    Rails.cache.write({job_uuid: job_uuid, type: 'result'}, result.to_json)
    Rails.cache.write({job_uuid: job_uuid, type: 'status'}, 'success')
    # the next two lines return the data in the logs
    Rails.logger.info("get_meta_info written to hash at #{job_uuid}")
    Rails.logger.info("get_meta_info result for #{job_uuid} was: #{Rails.cache.read({job_uuid: job_uuid, type: 'result'})}")
  rescue Exception => ex
    Rails.cache.write({job_uuid: job_uuid, type: 'result'}, ex)
    Rails.cache.write({job_uuid: job_uuid, type: 'status'}, 'error')
  end
end
This is the server-side code I use for polling (it is called by AJAX every second):
def get_meta_info_result
  job_uuid = params[:job_uuid]
  status = Rails.cache.read({job_uuid: job_uuid, type: 'status'})
  # the next two lines return nothing in the logs
  Rails.logger.info("nlp_provider_controller.get_meta_info_result for uuid #{job_uuid} read status #{status}")
  Rails.logger.info("nlp_provider_controller.get_meta_info_result for uuid #{job_uuid} read result #{Rails.cache.read({job_uuid: job_uuid, type: 'result'})}")
  respond_to do |format|
    if status == 'success'
      format.json { render json: Rails.cache.read({job_uuid: job_uuid, type: 'result'}) }
    elsif status == 'error'
      format.json { render :nothing => true, status: :no_content }
    else
      format.json { render :nothing => true, status: :partial_content }
    end
  end
end
I have no idea how to solve this.
Thank you!
It took two days to solve this, and it was a stupid mistake.
There are two configs, development.rb and production.rb. Not that I did not know that, but I usually do this kind of configuration in a separate initializer.
I had Redis configured in development.rb but not in production.rb.
Added:
redis_url = ENV["REDISTOGO_URL"] || "redis://127.0.0.1:6379/0/MyApp"
MyApp::Application.config.cache_store = :redis_store, redis_url
(based on: http://blog.jerodsanto.net/2011/06/connecting-node-js-to-redis-to-go-on-heroku/)
and it works.

Quickfixn - Tag Appears More Than Once Rejection

I'm having an issue with QuickFIX/n and I'm hoping someone with more experience working with it can shed some light on it. For some reason, messages are getting rejected by the QuickFIX engine because of repeating tags. I expect to have repeating tags, so I set the UseDataDictionary flag = Y in my config file, but messages are still getting rejected. Has anyone experienced a similar issue?
The message I'm receiving looks like:
8=FIXT.1.1 9=421 35=AE 34=8 1128=8 49=XXX 56=YYY 52=20130501-15:45:53 552=1 54=2 37=130501-5 11=NOREF 826=0 78=1 79=default 80=1000000.00 5967=12167800.00 453=4 448=ITXT 452=3 447=D 448=TEST 452=1 447=D 448=LMEB 452=16 447=D 448=FRTB 452=11 447=D 571=6718487 150=F 32=1000000.00 15=USD 1056=12167800.00 31=12.1678 194=12.1678 195=0 64=20130503 63=0 60=20130501-00:00:00 75=20130501 1057=Y 460=4 167=FOR 65=SP 55=USD/MXN 10=203
8=FIXT.1.1 9=124 35=3 34=8 49=XXX 52=20130501-15:45:54.209 56=YYY 45=8 58=Tag appears more than once 371=448 372=AE 373=13 10=210
My config file looks like this:
[DEFAULT]
ConnectionType=initiator
HeartBtInt=30
ReconnectInterval=10
SocketReuseAddress=Y
FileStorePath=D:\Store
FileLogPath=D:\Log
[SESSION]
BeginString=FIXT.1.1
SenderCompID=XXX
TargetCompID=YYY
DefaultApplVerId = FIX.5.0SP1
UseDataDictionary=Y
AppDataDictionary=D:\Interface\FIX50SP1.xml
StartDay=sunday
StartTime=20:55:00
EndTime=06:05:00
EndDay=saturday
SocketConnectHost=1.1.1.1
SocketConnectPort=8443
Any help would be greatly appreciated! Thank you.
Often this happens because there is a field in a repeating group that is not specified in the DataDictionary. The parser sees the field and assumes the repeating group has ended. It continues parsing fields as if they are not part of a group. If it sees a duplicate field in this context, the parser will report an error.
You may clone and modify the FIX data dictionary (D:\Interface\FIX50SP1.xml) to fit your needs if you have to process such "invalid" messages, or you may disable message validation.
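For illustration, the party repeating group on the trade capture report (35=AE) must declare every field the counterparty actually sends inside it, with the delimiter field (448) first. A sketch of the relevant dictionary fragment (the exact field set is an assumption; check your counterparty's spec):
<!-- Fragment of D:\Interface\FIX50SP1.xml -->
<message name="TradeCaptureReport" msgtype="AE" msgcat="app">
  <!-- ... other fields ... -->
  <group name="NoPartyIDs" required="N">
    <field name="PartyID" required="N"/>        <!-- tag 448, the group delimiter -->
    <field name="PartyIDSource" required="N"/>  <!-- tag 447 -->
    <field name="PartyRole" required="N"/>      <!-- tag 452 -->
  </group>
</message>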

SQLite Error #2044: near 'AUTOINCEREMENT': syntax error [closed]

It keeps throwing this error:
Error #2044: Unhandled SQLErrorEvent:. errorID=3115, operation=execute
, message=Error #3115: SQL Error. , details=near 'AUTOINCEREMENT':
syntax error;
Could anyone tell me what happened? I'd appreciate it!
My code:
private function createTable():void
{
    var sql:String = "CREATE TABLE IF NOT EXISTS log(" +
        "log_id INTEGER PRIMARY KEY AUTOINCEREMENT NOT NULL," +
        "log_date FLOAT NULL," +
        "log_content TEXT NULL)";
    var st:SQLStatement = new SQLStatement();
    st.sqlConnection = conn;
    st.text = sql;
    st.execute();
}
Shouldn't it be AUTOINCREMENT, not AUTOINCEREMENT?
See https://www.sqlite.org/autoinc.html
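For reference, the corrected statement (the linked page also notes that an INTEGER PRIMARY KEY column auto-increments in SQLite even without the keyword):
CREATE TABLE IF NOT EXISTS log(
    log_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    log_date FLOAT NULL,
    log_content TEXT NULL
);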

Method 'add' in COM object of class 'Documents' returned error code 0x800A175D (<unknown>) which means: <unknown>

I am trying to open a Word template from AX 2012 reports. It works fine in the environment where I developed it, but when I execute the same report from a different login I get the COM error above.
Please help.
You can always find help for these mysterious Office error codes by decoding the error code. COM error codes contain three major parts:
the top 4 bits indicate the severity of the error; 8 means "failure", an error you can't ignore
the next 12 bits are the facility code, the origin of the error; 10 means "automation"
the lower 16 bits are the internal error code, the one that you really care about.
Switch your calculator to hex mode: 0x175D is error code 5981. Now turn to Google and query "word error 5981".
Lots of good hits, you can read them at your leisure. But clearly there's a problem with macros on that machine. Best left to the IT staff at your site; use superuser.com if you need more help with that.
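The same decomposition as a quick code sketch (just the bit arithmetic described above):
public class HresultDecoder {
    public static void main(String[] args) {
        int hresult = 0x800A175D;

        int severity = (hresult >>> 28) & 0xF;   // top 4 bits: 8 = failure
        int facility = (hresult >>> 16) & 0xFFF; // next 12 bits: 0x00A = 10 = automation
        int code     = hresult & 0xFFFF;         // low 16 bits: 0x175D = 5981

        // Prints: severity=8 facility=10 code=5981
        System.out.printf("severity=%d facility=%d code=%d%n", severity, facility, code);
    }
}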