I have a CGI script which takes about 1 minute to run. Right now Apache only returns results to the browser once the process has finished.
How can I make it show the output like it was run on a terminal?
Here is an example that demonstrates the problem.
I want to see the numbers 1 to 5 appear as they are printed.
I had to disable mod_deflate to get chunked mode working with Apache.
I did not find another way for my CGI script to turn off the automatic gzip encoding.
There are several factors at play here. To eliminate a few possibilities up front: Apache and bash are not buffering any of the output. You can verify that with this script:
#!/bin/sh
# CGI header; the blank line before END is required to terminate the headers
cat <<END
Content-Type: text/plain

END
for i in $(seq 1 10)
do
    echo $i
    sleep 1
done
Stick this somewhere that Apache is configured to execute CGI scripts, and test with netcat:
$ nc localhost 80
GET /cgi-bin/chunkit.cgi HTTP/1.1
Host: localhost

HTTP/1.1 200 OK
Date: Tue, 24 Aug 2010 23:26:24 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.7l DAV/2
Transfer-Encoding: chunked
Content-Type: text/plain

2
1
2
2
2
3
2
4
2
5
2
6
2
7
2
8
2
9
3
10
0
When I do this, I see in netcat each number appearing once per second, as intended.
Note that my version of Apache, at least, applies the chunked transfer encoding automatically, presumably because I didn't include a Content-Length; if you return the Transfer-Encoding: chunked header yourself, then you need to encode the output of your script in the chunked transfer encoding. That's pretty easy, even in a shell script:
chunk () {
    printf '%x\r\n' "${#1}"    # Length of the chunk in hex, CRLF
    printf '%s\r\n' "$1"       # Chunk itself, CRLF
}

chunk $'1\n'    # $'...' is a Bash-ism, since it's pretty hard to get a newline
chunk $'2\n'    # character portably.
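One detail to be aware of if you do encode it yourself: a chunked body must end with a zero-length chunk. With the helper above, the terminator falls out for free (a sketch; I haven't tested it against picky clients):

chunk ''    # prints "0\r\n\r\n", the zero-length chunk that ends the stream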
However, serve this to a browser, and you'll get varying results depending on the browser. On my system, Mac OS X 10.5.8, I see different behaviors between my browsers. In Safari, Chrome, and Firefox 4 beta, I don't start seeing output until I've sent somewhere around 1000 characters (I would guess 1024 including the headers, or something like that, but I haven't narrowed it down to the exact behavior). In Firefox 3.6, it starts displaying immediately.
I would guess that this delay is due to content type sniffing, or character encoding sniffing, which are in the process of being standardized. I have tried to see if I could get around the delay by specifying proper content types and character encodings, but without luck. You may have to send some padding data (which would be pretty easy to do invisibly if you use HTML instead of plain text), to get beyond that initial buffer.
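For instance, a hypothetical helper along these lines would emit invisible padding if you're serving HTML. The 1024 figure is my guess based on the experiment above, not a documented threshold, and the %*s width trick needs a printf that supports it (bash's builtin does):

pad () {
    # Emit an HTML comment roughly $1 bytes long: invisible on the page,
    # but enough data to push the browser past its sniffing buffer.
    printf '<!--%*s-->\n' "$1" ''
}
pad 1024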
Once you start streaming HTML instead of plain text, the structure of your HTML matters too. Some content can be displayed progressively, while some cannot. For instance, streaming down <div>s into the body, with no styling, works fine, and can display progressively as it arrives. If you try to open a <pre> tag, and just stream content into that, Webkit based browsers will wait until they see the close tag to try to lay that out, while Firefox is happy to display it progressively. I don't know all of the corner cases; you'll have to experiment to see what works for you.
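As a concrete sketch of the progressive case (same setup as the script above; the markup is mine):

#!/bin/sh
cat <<END
Content-Type: text/html

<html><body>
END
for i in $(seq 1 5)
do
    echo "<div>chunk $i</div>"    # unstyled divs render as they arrive
    sleep 1
done
echo '</body></html>'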
Anyhow, I hope this helps you get started. Let me know if you have any more questions!
I have a dataset on server1 that I want to back up to server2.
Server1 (original):
zfs list -o name,used,avail,refer,creation,usedds,usedsnap,origin,compression,compressratio,refcompressratio,mounted,atime,lused storage/iscsi/webhost-old produces:
NAME                       USED   AVAIL  REFER  CREATION              USEDDS  USEDSNAP  ORIGIN  COMPRESS  RATIO  REFRATIO  MOUNTED  ATIME  LUSED
storage/iscsi/webhost-old  67,8G  1,87T  67,8G  Út kvě 31  6:54 2016  67,8G   16K       -       lz4       1.00x  1.00x     -        -      67,4G
Sending volume to the 2nd server:
zfs send storage/iscsi/webhost-old | pv | ssh -c arcfour,aes128-gcm@openssh.com root@10.0.0.2 zfs receive -Fduv pool/bkp-storage
received 69,6GB stream in 378 seconds (189MB/sec)
Server2 zfs list produces:
NAME                                USED   AVAIL  REFER  CREATION              USEDDS  USEDSNAP  ORIGIN  COMPRESS  RATIO  REFRATIO  MOUNTED  ATIME  LUSED
pool/bkp-storage/iscsi/webhost-old  36,1G  3,01T  36,1G  Pá pro 29 10:25 2017  36,1G   0         -       lz4       1.15x  1.15x     -        -      28,4G
Why is there such a difference in sizes? Thanks.
From what you posted, I noticed 3 things that seemed odd:
1. the compressratio is 1.15x on system 2, but 1.00x on system 1
2. on system 2, used is 1.27x higher than logicalused
3. the logicalused and the number zfs receive reports are ~2.3x higher on system 1 than on system 2
These terms are all defined in the man page, but it is still confusing to reverse-engineer explanations for them in practice.
(1) could happen if you enabled compression on the source dataset after you wrote all the data to it, since ZFS doesn't rewrite the data to compress it when you enable that setting. The data sent by zfs send is uncompressed unless you use -c, but system 2 will try to compress it as it runs zfs receive if the setting is enabled on the destination dataset. If both system 1 and system 2 had the same compression settings before the data was written, they would have the same compressratio as well.
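To check that theory for (1), something along these lines would show the settings on both sides. The snapshot name @snap is a placeholder, and send -c needs a reasonably recent ZFS (both are my assumptions, not from the question):

# See whether compression is enabled (and where it is inherited from)
zfs get compression storage/iscsi/webhost-old    # system 1
zfs get compression pool/bkp-storage             # system 2

# Ship blocks as stored on disk, already compressed, with send -c
zfs send -c storage/iscsi/webhost-old@snap | ssh root@10.0.0.2 zfs receive -Fduv pool/bkp-storage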
(2) can happen due to metadata written along with your data, but in this case it's too high for "normal" metadata, which accounts for 1-2% of most pools. It's probably caused by a pool-wide setting, like configuring RAID-Z, or a weird combination of striping and mirroring (like 4 stripes, but with one of them being a mirror).
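Comparing the pool layouts would confirm or rule that out; judging from the dataset paths, the pools are named storage and pool (my inference):

zpool status storage    # system 1: look for raidz vdevs or mixed mirror/stripe layouts
zpool status pool       # system 2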
For (3), I re-read the man page to try to figure it out:
logicalused
The amount of space that is "logically" consumed by this dataset and
all its descendents. See the used property. The logical space
ignores the effect of the compression and copies properties, giving a
quantity closer to the amount of data that applications see.
If you were sending a dataset (instead of a single iSCSI volume) and the send size matched system 2's logicalused value (instead of system 1's), I would guess you forgot to send some child datasets (i.e. by using zfs send -R). However, neither of those are true in this case.
I had to do some additional digging -- this blog post from 2005 might contain the explanation. If system 1 didn't have compression enabled when the data was written (like I guessed above for (1)), the function responsible for not writing zeroed-out blocks (zio_compress_data) would not be run, so you probably have a bunch of empty blocks written to disk, and accounted for in the logicalused size. However, since lz4 is configured on system 2, it would run there, and those blocks would not be counted.
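You can demonstrate that accounting difference yourself. Here is a rough sketch with made-up dataset names; it only illustrates the effect, not your exact numbers:

# With compression off, zero-filled blocks are written out and counted
zfs create -o compression=off tank/test-off
dd if=/dev/zero of=/tank/test-off/zeros bs=1M count=100
zfs get used,logicalused,compressratio tank/test-off

# With lz4, the same zeros shrink to (nearly) nothing
zfs create -o compression=lz4 tank/test-lz4
dd if=/dev/zero of=/tank/test-lz4/zeros bs=1M count=100
zfs get used,logicalused,compressratio tank/test-lz4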
I have the following script. Sometimes it runs fine, and other times it gets stuck. What could be wrong here?
#!/usr/bin/env expect
# set Variables
set timeout 60
set ipaddr [lindex $argv 0]
# start telnet connection
spawn telnet $ipaddr
match_max 100000
# Look for user prompt
expect "username:*"
send -- "admin\r"
expect "password:?"
# Send pass
send "thisisthepass\n"
# look for WWP prompt
expect ">"
send "sendthiscommand\r"
expect ">"
send "exit\r"
interact
The script runs fine till the end, but sometimes it gets stuck during login. This happens even with the same IP: for example, it may succeed on 1 out of 5 tries against the same device.
I have tried adding a sleep between sending the username and the password, but it's still the same. I have also tried without expect, sending the password string directly after the username one, but it's the same: sometimes the script runs fine, and other times it asks for the password again as if it were incorrect...
username: admin
password:
username:
Things I would do:
change send "thisisthepass\n" to send "thisisthepass\r"
include exp_internal 1 somewhere at the top of your script, and see what is going on when you have a failed attempt
exp_internal 1 will enable debugging with lots of good information on what is going on with expect's pattern matching. You can share it here and I'll be glad to take a look at it.
Are you sure the password prompt has an extra character after it (the ? in your expect "password:?")? Is it always there? Any chance different devices have slightly different password prompts?
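One more tip: you don't have to edit the script to get that debugging output; expect's -d flag turns on the same diagnostics as exp_internal 1 from the command line (the script and argument names here are placeholders):

expect -d ./login.exp 10.0.0.1 2> expect-debug.log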
I am a beginner in expect... I have written a small script which has to log in to a router and execute a few commands.
But somehow I am finding that even though I have used send "admin show platform" THRICE, it only works twice for me... I only get the output of admin show platform twice.
Can anyone check the code and point out where I am actually screwing up?
Gsaxena#
Gsaxena#
Gsaxena# ./testTool
spawn /usr/bin/ksh
telnet 5.28.7.103
$ telnet 5.28.7.103
Trying 5.28.7.103...
Connected to 5.28.7.103.
Escape character is '^]'.
User Access Verification
Username:
Username: lab
Password:
RP/0/RP0/CPU0:Billorani#debug ospf ospf1 adj
Mon Oct 14 17:16:06.144 UTC
**RP/0/RP0/CPU0:Billorani#show platform**
Mon Oct 14 17:16:06.416 UTC
Node Type PLIM State Config State
------------- ----------------- ------------------ --------------- ---------------
x/x/x0 xxxxG N/A IN-RESET PWR,NSHUT,MON
**RP/0/RP0/CPU0:Billorani#show platform**
Mon Oct 14 17:16:06.416 UTC
Node Type PLIM State Config State
------------- ----------------- ------------------ --------------- ---------------
x/x/xxx0 xxxxG N/A IN-RESET PWR,NSHUT,MON
RP/0/RP0/CPU0:Billorani#
Gsaxena#
Gsaxena#
Gsaxena#
Gsaxena#
Gsaxena#
#!/usr/bin/expect
set timeout 30
set hostcut "Bil"
sleep 5
set timeout 5
spawn /usr/bin/ksh
send "telnet 5.8.7.103\r"
expect ".*\'\^\]\'\. *"
send "\r"
expect "Username\:"
send "lab\n"
expect "Password\: "
send "lab\n"
sleep 10
expect -re "RP\/.\/.*\/CPU.:$hostcut.*#"
send "debug ospf ospf1 adj\n"
expect -re "RP\/.\/.*\/CPU.:$hostcut.*#"
send "admin show platform\n"
expect -re "RP\/.\/.*\/CPU.:$hostcut.*#"
send "admin show platform\n"
expect -re "RP\/.\/.*\/CPU.:$hostcut.*#"
send "admin show platform\n"
I should really be placing this not in an actual answer but in a comment, since I do not have a final answer for you, but it seems comments can only be left by folks who have been around for some time (there's a minimum reputation before you can leave them).
Anyways, what I wanted to suggest was that you add exp_internal 1 somewhere near the start of your script. This will provide a ton of useful debugging information, and will most likely point at what is going on. Feel free to post it here if you need help with it.
I can't tell what is wrong from the information you posted... nothing seems obviously at fault. One thing I would do differently: instead of spawning a Korn shell process and then sending a telnet command to it, I would just spawn the telnet command directly (less code, fewer resources). But that is not what is bothering you, so never mind that.
I'm not familiar with the OS you're logging in to... is that Cisco IOS XR? I'm baffled by the fact that you issue an admin show platform command, yet only show platform shows up in your stdout. Also, what's the deal with the double asterisks (**) that some prompts show, while others don't?
One last thing, which may seem dumb, but... can you manually access the device and issue those 4 exact commands, in that exact order?
Regards,
James
In your code, in
send "admin show platform\n"
use "\r" instead of "\n". send transmits exactly the characters you give it, and a terminal's Enter key sends a carriage return (\r); some devices will not treat a bare linefeed (\n) as end-of-line.
While trying to convert a multipage document from a tiff to a pdf, I encountered the following problem:
↪ tiff2pdf 0271.f1.tiff -o 0271.f1.pdf
tiff2pdf: No support for 0271.f1.tiff with no photometric interpretation tag.
tiff2pdf: An error occurred creating output PDF file.
Does anybody know what causes this and how to fix it?
This happens because one or more of the pages in the multi-page tiff does not have the photometric interpretation tag set. This is a required tag, so your tiffs are technically invalid (though I bet they work fine anyway).
To fix this, you must identify the page (or pages) that does not have the photometric interpretation set and fix it.
To identify the page, you can simply run something like:
↪ tiffinfo your-file.tiff
This will spit out the info for every page of your tiff. For each good page, you'll see something like:
TIFF Directory at offset 0x105c0 (67008)
Subfile Type: (0 = 0x0)
Image Width: 1760 Image Length: 2639
Resolution: 300, 300 pixels/inch
Bits/Sample: 1
Compression Scheme: CCITT Group 4
**Photometric Interpretation: min-is-white**
FillOrder: msb-to-lsb
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 1
Rows/Strip: 2639
Planar Configuration: single image plane
Software: ScanFix(TM) Enhanced ImageGear Version: 11.00.024
DateTime: Mon Oct 31 15:11:07 2005
Artist: 1996-2001 AccuSoft Co., All rights reserved
If you have a bad page, it'll lack the photometric interpretation section, and you can fix it with:
↪ tiffset -d <page-number> -s 262 0 your-file.tiff
Note that the value of zero is the default for the photometric interpretation tag, which is tag 262, and that page (directory) numbers start at 0. You can see the other values for this tag at the link above.
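If you'd rather not eyeball the tiffinfo output, a quick check (my own sketch, not part of the original workflow) is to compare the number of directories against the number of photometric tags:

↪ tiffinfo your-file.tiff | grep -c 'TIFF Directory'              # total pages
↪ tiffinfo your-file.tiff | grep -c 'Photometric Interpretation'  # pages that have the tag

If the two counts differ, at least one page is missing the tag.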
If your tiff has a lot of pages (like mine does), you may not be able to easily identify the bad page by eye. In that case, you can take a brute force approach, setting the photometric interpretation for all pages to the default value.
# First, split the tiff into many one-page files; note that tiffsplit
# writes its output with a .tif extension (xaaa.tif, xaab.tif, ...)
↪ tiffsplit your-file.tiff
# Then, set the photometric interpretation to the default for all pages
↪ find . -name 'x*.tif' -exec tiffset -s 262 0 '{}' \;
# Then rejoin the pages (tiffcp takes the output file as its last argument)
↪ tiffcp x*.tif out-file.tiff
A lot of tedious work, but it gets the job done.
I am trying to use TAP (Test Anything Protocol) as our test result format. However, some log files need to be attached to the test results, and I am looking for a good practice for achieving this.
For example, I have a TAP file and two log files: a.log and b.log
1..1
ok 1 - sample.MyFirstTest#testCurrentTime
  ---
  message: Hello
  logfile: a.log, b.log
  ...
Is there any good way to insert log file content into this TAP file? Thanks.
YAMLish is the solution: we can embed file content into the TAP file. The file content can be encoded as base64; here is a TAP example:
1..2
not ok 1
  ---
  Extensions:
    File-Name: test.log
    File-Type: text/plain
    File-Content: VGhpcyBpcyBhIGxvZyAK
  ...
ok 2 # SKIP test 1 failed
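For completeness, the File-Content value above is just the base64 encoding of the log file; producing it from a shell is one option (the test.log contents here are reconstructed from the example):

$ printf 'This is a log \n' > test.log
$ base64 < test.log
VGhpcyBpcyBhIGxvZyAK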