Size of "Bytes sent" in apache log file - apache

I added another field to my Apache log file: %O, which is documented as the number of bytes sent to the user, including headers. Here is my question: how do I count the size of the HEADERS? I have already tried $ENV{'CONTENT_LENGTH'}, but that isn't it.
I believe there must be a way to determine the HEADERS size from a CGI script, but so far I have no idea how.
Thanks for help in advance ;)

%b logs the size of the response body in bytes, without headers, so you can log both fields and take the difference between %O and %b to get the header size.
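For example, with both fields in the format, e.g. LogFormat "%>s %b %O" sizes, each request logs the body size and the total size side by side. A minimal sketch of the subtraction in Python, assuming a log line in exactly that three-field format (the format name, field order, and sample values are just assumptions):

# a sketch assuming a log line of the form "%>s %b %O"
line = '200 5120 5489'
status, body_bytes, total_bytes = line.split()
body = 0 if body_bytes == '-' else int(body_bytes)  # %b logs '-' when no body was sent
header_bytes = int(total_bytes) - body
print(header_bytes)  # 369 bytes of headers in this example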

Hash 'hashcat': Token length exception

hashcat64.exe hashcat -m0 -a0 crackme.txt password.txt
Device #1: Intel's OpenCL runtime(GPU only) is currently broken. We
are waiting for updated OpenCL drivers from Intel
Hash 'hashcat': Token length exception No hashes loaded.
I'm getting this message. I've attached a snapshot of my CL.
I've looked for any spaces in the hash directory and its format.
I've also tried changing all the Unicode formats of the .txt file.
Nothing seems to work. I've also updated the Intel drivers.
Can anyone help, please? Thanks in advance.
I think you should look at the end of each line in the files containing your hashes and passwords. If there are spaces at the ends of the lines, you will get a "token length exception" or "No hashes loaded" error. Just remove those spaces and try again.
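A minimal sketch of such a cleanup in Python, assuming the hash file is named crackme.txt (the same treatment applies to the wordlist):

# strip trailing whitespace (including stray \r from Windows line endings)
# from every line of the hash file; a sketch, not part of hashcat itself
with open('crackme.txt', 'rb') as f:
    lines = f.read().splitlines()
with open('crackme.txt', 'wb') as f:
    for line in lines:
        f.write(line.rstrip() + b'\n')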
For anyone looking into this: I used two rule files; you can use many others to increase the efficiency.
hashcat64.exe hashcat -m0 -a0 crackme.txt password.txt -r rules/best64.rule
or
hashcat64.exe hashcat -m0 -a0 crackme.txt password.txt -r rules/d3ad0ne.rule
This error can also occur if the hash file is not found. Note that the restore file effectively encodes the absolute path to the hash file, so this error can occur if the file has moved when you attempt to resume. (Technically, it saves the potentially relative path as specified when originally run, but it also saves the original working directory and cds there first.)

How can I pass a 'unicode string' to os.environ within a wsgi.py

If I try to set environment variables as part of my wsgi.py, I run into problems if the values contain non-ASCII characters.
Traceback (most recent call last):
File "/home/vagrant/pyvenv/lib/python3.5/site-packages/absys/config/wsgi.py", line 13, in <module>
os.environ['DJANGO_TESTVAR'] = 'M\xc3\xb6\xc3\xb6\xc3\xb6\xc3\xb6'
File "/usr/lib/python3.5/os.py", line 730, in __setitem__
value = self.encodevalue(value)
File "/usr/lib/python3.5/os.py", line 799, in encode
return value.encode(encoding, 'surrogateescape')
UnicodeEncodeError: 'ascii' codec can't encode characters in position 1-4: ordinal not in range(128)
When I try to do the same thing as a regular user or as root, it works flawlessly. This seems to be due to the fact that os.environ accepts the passed unicode value ('Müüü') and does not try to encode it.
For a reason I don't understand, the same does not seem to be true when it runs as part of wsgi.py.
For a second I thought this question could provide an answer, but setting LANG=de_DE.UTF-8 in /etc/apache2/envvars did not change a thing.
I have tried to read pretty much all of the resources around on django/wsgi/envvars, and in particular Graham Dumpleton's approach, but none of them seem to mention any encoding issues.
I guess my question (governed by my understanding so far) boils down to:
"What governs os.environ's encoding behaviour, and how can I influence it within the wsgi process?"
If there is any additional information I can provide to aid finding an answer please let me know.
This answer is just a reiteration of Graham Dumpleton's most helpful comment. All credit is his.
This problem is most likely the result of messed-up locale settings in the wsgi process's environment.
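To verify this, you can temporarily print the encodings the wsgi process actually sees; a diagnostic sketch to drop into wsgi.py (the output lands in Apache's error log) and remove afterwards:

# temporary diagnostics for wsgi.py; os.environ encodes values with the
# filesystem encoding, which is plain ASCII under a POSIX/C locale
import locale
import sys
print('filesystem encoding:', sys.getfilesystemencoding(), file=sys.stderr)
print('preferred encoding:', locale.getpreferredencoding(False), file=sys.stderr)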
In case your mod_wsgi runs as its own dedicated daemon (as it most likely should), you can pass it the desired locale directly and thus avoid any issues arising from how your distribution handles Apache's environment.
For this something along these lines should do the trick:
WSGIDaemonProcess my-django-site lang='en_US.UTF-8' locale='en_US.UTF-8'
For a more elaborate explanation, please read Graham's excellent blog post and refer to mod_wsgi's documentation.
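For completeness, in the vhost the directive usually sits next to a matching process group, along these lines (a sketch; the group name and the script path are placeholders):

WSGIDaemonProcess my-django-site lang='en_US.UTF-8' locale='en_US.UTF-8'
WSGIProcessGroup my-django-site
WSGIScriptAlias / /path/to/wsgi.py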

Row larger than the maximum allowed size

I have successfully imported many gzipped JSON files on several occasions. On two of the files, however, the BQ import choked. Both files reported the same error:
File: 0 / Offset:0 / Line:1 / Column:20971521, Row larger than the maximum allowed size
Now, I've read about the 20 MB row limit, and I understand that the number above is 20 MB + 1, but what really bugs me is that the meaning is totally off. My GZs have millions of JSONs (each on a new line). I have written a script to measure the longest line (the longest JSON) in the failed GZ file and found it to be 103571 bytes. Why is the BQ import choking then?
I have inspected the longest JSON and it looks perfectly normal. How should I interpret the error? How can I fix it?
Why is BQ thinking the import is on line 1, column 20971521 when there are millions of lines in the file?
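For reference, a sketch of the kind of measurement script mentioned above, assuming the failed file is named data.json.gz (the name is a placeholder):

# find the longest line in a gzipped newline-delimited JSON file
import gzip
longest = 0
with gzip.open('data.json.gz', 'rb') as f:
    for line in f:
        longest = max(longest, len(line))
print('longest line:', longest, 'bytes')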
All your investigations are correct, but you must check your file: the newlines are not being recognized, so BQ sees the whole import as one large line.
That's why it reports column 20971521 for the problem.
You should try importing a sample from the file.
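A sketch of pulling a small sample out of the gzipped file for such a test import (both filenames are placeholders):

# copy the first 1000 lines into a separate gzipped file for a test load
import gzip
import itertools
with gzip.open('data.json.gz', 'rb') as src, gzip.open('sample.json.gz', 'wb') as dst:
    for line in itertools.islice(src, 1000):
        dst.write(line)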
Some of the answers here gave me an idea, so I went on and tried it. It appears as if, for some strange reason, BQ didn't like the line endings in the file, so I wrote a quick script to rewrite the original input file with line endings it would accept. Automagically the import worked!
This is utterly strange, considering I had already imported many GBs of data with the same kind of line endings.
I am happy that it worked, but I could never guess why. I hope this helps someone else.
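A sketch of what such a rewrite can look like, assuming the input file is data.json.gz and that the culprit was line endings BQ does not recognize (the filenames and the exact endings are assumptions):

# rewrite a gzipped file so every line ends with a plain \n
import gzip
with gzip.open('data.json.gz', 'rb') as src, gzip.open('data_fixed.json.gz', 'wb') as dst:
    data = src.read()
    dst.write(data.replace(b'\r\n', b'\n').replace(b'\r', b'\n'))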

Internal error while loading to BigQuery table

I ran this command to load 11 files into a BigQuery table:
bq load --project_id=ardent-course-601 --source_format=NEWLINE_DELIMITED_JSON dw_test.rome_defaults_20140819_test gs://sm-uk-hadoop/queries/logsToBq_transformLogs/rome_defaults/20140819/23af7218-617d-42e8-884e-f213a583094a/part* /opt/sm-analytics/projects/logsTobqMR/jsonschema/rome_defaultsSchema.txt
I got this error:
Waiting on bqjob_r46f38146351d545_00000147ef890755_1 ... (11s) Current status: DONE
BigQuery error in load operation: Error processing job 'ardent-course-601:bqjob_r46f38146351d545_00000147ef890755_1': Too many errors encountered. Limit is: 0.
Failure details:
- File: 5: Unexpected. Please try again.
I tried many times after that and still got the same error.
To debug what went wrong, I instead loaded each file one by one into the BigQuery table. For example:
/usr/local/bin/bq load --project_id=ardent-course-601 --source_format=NEWLINE_DELIMITED_JSON dw_test.rome_defaults_20140819_test gs://sm-uk-hadoop/queries/logsToBq_transformLogs/rome_defaults/20140819/23af7218-617d-42e8-884e-f213a583094a/part-m-00011.gz /opt/sm-analytics/projects/logsTobqMR/jsonschema/rome_defaultsSchema.txt
There are 11 files in total, and each one loaded fine on its own.
Could someone please help? Is this a bug on BigQuery's side?
Thank you.
There was an error reading one of the files: gs://...part-m-00005.gz
Looking at the import logs, it appears that the gzip reader encountered an error decompressing the file.
It looks like that file may not actually be compressed. BigQuery samples the header of the first file in the list to determine whether it is dealing with compressed or uncompressed files and to determine the compression type. When you import all of the files at once, it only samples the first file.
When you run the files individually, BigQuery reads the header of each file and determines that it isn't actually compressed (despite having the suffix '.gz'), and so imports it as a normal flat file.
If you run a load that doesn't mix compressed and uncompressed files, it should work successfully.
Please let me know if you think this is not the case and I'll dig in some more.
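If you want to check the files yourself, real gzip files start with the magic bytes 0x1f 0x8b; a sketch for inspecting a downloaded copy (the filename is a placeholder):

# report whether a local file really is gzip-compressed
with open('part-m-00005.gz', 'rb') as f:
    magic = f.read(2)
print('gzip' if magic == b'\x1f\x8b' else 'not gzip, despite the .gz suffix')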

How to use @ symbol in HTML in a CGI script

Surely a very simple question, but I can't seem to find the terminology to find the answer in a search!
I'm using a file-uploader CGI script. Inside the CGI script is some code that generates some HTML. In the HTML I need to put an email address using the @ symbol; however, this breaks the script. What is the correct way to escape the @ symbol in a CGI script?
The error when using the @ symbol is:
"FileChucker: load_external_prefs(): Error processing your prefs file ('filechucker_prefs.cgi'): Global symbol "@email" requires explicit package name at (eval 16) line 1526."
Many thanks for any help
Update:
Hi all, many thanks for the replies. I guess it is Perl (which shows my ignorance of what's going on here perfectly!). The code below shows the problem: the @ in 'email@domain.com'.
$PREF{app_output_template} = qq`
%%%ifelse-onpage_uploader%%%
<div id="fcintro">If you're using a mobile or tablet and have problems uploading, we recommend emailing your CV to: email#domain.com<br><span class"upload_limits">We can accept Adobe PDF, Microsoft Word and all popular image and text file types. (max total upload size: 7MB)</span></div>
%%%else%%%
%%subtitle%%
%%%endelse-onpage_uploader%%%
%%app_output_body%%`;
Try &#64; instead of @. Inside Perl's interpolating qq`...` string a literal @ starts an array variable, which is why the script dies with the Global symbol "@email" error; the HTML entity &#64; means nothing special to Perl but is rendered as @ by the browser. Escaping the character as \@ inside the string should also work.
Reference: http://www.w3schools.com/tags/ref_ascii.asp