I want to make it easier to find logs for a specific user, so I want to grep/filter only the sections which contain the string 15120000000. A section starts with a line that begins with a timestamp (Sep 16 19:31:46 in this example; or it could be every line starting with "Sep 16"). Is it possible to use grep or awk for this? Thanks in advance for all the help.
Here is the log sample.
Sep 16 19:31:46 da1psbc05pev kamailio[31135]: INFO: <script>: onreply_route Rcvd [487] response from[25.11.214.107:5061] MsgId[6384483] From[sip:15120000000#3bc.cloud.comptel.com] callid[f91a7279-15751118-6c5f65af#192.168.1.58]
SIP/2.0 487 Request Terminated
From: "TAC Offnet"<sip:15120000000#3bc.cloud.comptel.com>;tag=B251F2AD-C3A0DEEC
To: <sip:0001#3bc.cloud.comptel.com;user=phone>;tag=dc25d928-0-13c4-6006-cb98c-2310cd2b-cb98c
Call-ID: f91a7279-15751118-6c5f65af#192.168.1.58
CSeq: 1 INVITE
Via: SIP/2.0/TLS 25.11.214.72:5061;alias;rport=54928;branch=z9hG4bKbbd9.62d12aaf57f6574fd99b719735009993.0;i=4e0f7
Via: SIP/2.0/TLS 192.168.1.58:36304;received=199.199.199.122;rport=36304;branch=z9hG4bK47084e2320E46012
Supported: timer,replaces,info
User-Agent: compGear/21.79.9310.0 (compTel 15)
Content-Length: 0
Sep 16 19:31:46 DaHostname kamailio[31135]: : <core> [msg_translator.c:553]: lump_check_opt(): ERROR: lump_check_opt: null send socket
Sep 16 19:31:46 DaHostname kamailio[31135]: : <core> [msg_translator.c:553]: lump_check_opt(): ERROR: lump_check_opt: null send socket
Sep 16 19:31:46 DaHostname kamailio[31135]: INFO: <script>: onsend_route Dumping MsgId[6384483] sending to[199.199.199.122:36304] size[556] callid[f91a7279-15751118-6c5f65af#192.168.1.58]
SIP/2.0 487 Request Terminated
Record-Route: <sip:25.11.214.72:5061;transport=tls;transport=tls;lr=on>
From: "TAC Offnet"<sip:15120000000#3bc.cloud.Comptel.com>;tag=B251F2AD-C3A0DEEC
To: <sip:0000#3bc.cloud.Comptel.com;user=phone>;tag=dc25d928-0-13c4-6006-cb98c-2310cd2b-cb98c
Call-ID: f91a7279-15751118-6c5f65af#192.168.1.58
CSeq: 1 INVITE
Via: SIP/2.0/TLS 192.168.1.58:36304;received=199.199.199.122;rport=36304;branch=z9hG4bK47084e2320E46012
Supported: timer,replaces,info
User-Agent: CompGear/21.79.9310.0 (CompTel 15)
Content-Length: 0
Sep 16 19:31:46 DaHostname kamailio[31141]: INFO: <script>: onreply_route Rcvd OPTIONS [200] response from[201.125.123.125:46518] MsgId[6331041] From[sip:sips:3] callid[CompTel_1474068706-1077690787]
Assuming sections are separated by one or more blank lines,
$ awk -v RS= '/15120000000/' file
should do
Thanks for the answer. This gave me exactly what I wanted.
awk 'BEGIN{RS="Sep 16"; ORS="Sep"} /15120000000/ {print}' /var/log/log
I used the date part of the timestamp (Sep 16) as the record separator and then selected records containing the number 15120000000, so that part is working pretty nicely.
Now, is there a way I can make it dynamic? How can I make RS match any month from Jan to Dec? Thanks.
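One way to make it month-agnostic, without depending on GNU awk's regex record separators, is to accumulate each section yourself and flush it when it matches. This is only a sketch, reusing the 15120000000 string and the /var/log/log path from above:

awk '
  /^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) +[0-9]+ / {
    # a new section starts here: print the previous one if it matched
    if (rec ~ /15120000000/) printf "%s", rec
    rec = ""
  }
  { rec = rec $0 "\n" }
  END { if (rec ~ /15120000000/) printf "%s", rec }
' /var/log/log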
I have a ROS bag, and its information is as follows:
path: zed.bag
version: 2.0
duration: 3:55s (235s)
start: Nov 12 2014 04:28:20.90 (1415737700.90)
end: Nov 12 2014 04:32:16.65 (1415737936.65)
size: 668.3 MB
messages: 54083
compression: none [848/848 chunks]
types: sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
sensor_msgs/CompressedImage [8f7a12909da2c9d3332d540a0977563f]
tf2_msgs/TFMessage [94810edda583a504dfda3829e70d7eec]
topics: /stereo_camera/left/camera_info_throttle 3741 msgs : sensor_msgs/CameraInfo
/stereo_camera/left/image_raw_throttle/compressed 3753 msgs : sensor_msgs/CompressedImage
/stereo_camera/right/camera_info_throttle 3741 msgs : sensor_msgs/CameraInfo
/stereo_camera/right/image_raw_throttle/compressed 3745 msgs : sensor_msgs/CompressedImage
/tf 39103 msgs : tf2_msgs/TFMessage (2 connections)
I can extract images by following
http://wiki.ros.org/rosbag/Tutorials/Exporting%20image%20and%20video%20data
but an issue occurs when I want to get the camera info. Does anyone know how to solve it?
You can solve it by echoing the text-based information into a file using rostopic:
rostopic echo -b zed.bag /stereo_camera/left/camera_info_throttle > data.txt
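If you need both the left and right camera_info topics, a small shell loop over the topic names shown in the bag info works as well (a sketch; the output file names are just examples):

# Dump each camera_info topic from the bag into its own text file.
for side in left right; do
    rostopic echo -b zed.bag "/stereo_camera/${side}/camera_info_throttle" > "camera_info_${side}.txt"
done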
I am looking for a way to display the results of the file "tcp-variants-comparison.cc" under ns3 (3.28) on Ubuntu 18.04.
I found an old topic from 2013 here, but it does not seem to work correctly in my current environment.
P.S.: I am a newbie with ns3, so I will appreciate any help.
regards
cedkhader
Running ./waf --run "tcp-variants-comparison --tracing=1" yields the following files:
-rw-rw-r-- 1 112271415 Aug 5 15:52 TcpVariantsComparison-ascii
-rw-rw-r-- 1 401623 Aug 5 15:52 TcpVariantsComparison-cwnd.data
-rw-rw-r-- 1 1216177 Aug 5 15:52 TcpVariantsComparison-inflight.data
-rw-rw-r-- 1 947619 Aug 5 15:52 TcpVariantsComparison-next-rx.data
-rw-rw-r-- 1 955550 Aug 5 15:52 TcpVariantsComparison-next-tx.data
-rw-rw-r-- 1 38 Aug 5 15:51 TcpVariantsComparison-rto.data
-rw-rw-r-- 1 482134 Aug 5 15:52 TcpVariantsComparison-rtt.data
-rw-rw-r-- 1 346427 Aug 5 15:52 TcpVariantsComparison-ssth.data
You can use other command-line arguments to generate the desired output; see the list below, and the example invocation that follows it.
Program Arguments:
--transport_prot: Transport protocol to use: TcpNewReno, TcpHybla, TcpHighSpeed, TcpHtcp, TcpVegas, TcpScalable, TcpVeno, TcpBic, TcpYeah, TcpIllinois, TcpWestwood, TcpWestwoodPlus, TcpLedbat [TcpWestwood]
--error_p: Packet error rate [0]
--bandwidth: Bottleneck bandwidth [2Mbps]
--delay: Bottleneck delay [0.01ms]
--access_bandwidth: Access link bandwidth [10Mbps]
--access_delay: Access link delay [45ms]
--tracing: Flag to enable/disable tracing [true]
--prefix_name: Prefix of output trace file [TcpVariantsComparison]
--data: Number of Megabytes of data to transmit [0]
--mtu: Size of IP packets to send in bytes [400]
--num_flows: Number of flows [1]
--duration: Time to allow flows to run in seconds [100]
--run: Run index (for setting repeatable seeds) [0]
--flow_monitor: Enable flow monitor [false]
--pcap_tracing: Enable or disable PCAP tracing [false]
--queue_disc_type: Queue disc type for gateway (e.g. ns3::CoDelQueueDisc) [ns3::PfifoFastQueueDisc]
--sack: Enable or disable SACK option [true]
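For example, a run that selects a different TCP variant and tweaks a few of the parameters above might look like this (only a sketch: the argument names come from the list above, while the chosen values, including the TcpVegasRun prefix, are purely illustrative):

# Illustrative invocation; adjust the values to your experiment.
./waf --run "tcp-variants-comparison --transport_prot=TcpVegas --num_flows=2 --duration=60 --prefix_name=TcpVegasRun --tracing=1"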
In ns3.36.1 I used this command:
./ns3 run examples/tcp/tcp-variants-comparison.cc -- --tracing=1
and the output looks like this:
TcpVariantsComparison-ascii
TcpVariantsComparison-cwnd.data
TcpVariantsComparison-inflight.data
TcpVariantsComparison-next-rx.data
TcpVariantsComparison-next-tx.data
TcpVariantsComparison-rto.data
TcpVariantsComparison-rtt.data
TcpVariantsComparison-ssth.data
I have a multipart form in my Sails.js project which submits two different files (first audio, then image) along with some text params. In most cases, with rather small files, everything works fine. But when I tried a bigger audio file (33 MB), I got an empty files array for my image field in the receiver.
Here is some code.
The Controller:
var uploadParamNames = ['audio', 'image'];
async.map(uploadParamNames,
function (file, cb) {
sails.log(req.file(file)._files)
req.file(file).upload(
{
adapter: require('skipper-gridfs'),
uri: sails.config.connections.mongoConnection.url + '.' + file
},
function (err, files) {
// save the file, and then:
return cb(err, files);
});
}, function doneUploading(err, files) {
...
});
Basically, here I get the following logs for audio and image:
[ { stream: [Object], status: 'bufferingOrWriting' } ]
[]
I debugged it and found that, in the case of the image field, it never reaches the line in prototype.onFile.js where the file is actually written, up.writeFile(part);.
Also the debug log prints the following:
Parser: Read a chunk of textparam through field `_csrf`
Parser: Read a chunk of textparam through field `ss-name`
Parser: Read a chunk of textparam through field `ss-desc`
Parser: Read a chunk of textparam through field `ss-category`
Parser: Read a chunk of textparam through field `ss-language`
Parser: Read a chunk of textparam through field `ss-place`
Parser: Read a chunk of textparam through field `ss-place-lat`
Parser: Read a chunk of textparam through field `ss-place-lon`
Acquiring new Upstream for field `audio`
Tue, 13 Oct 2015 10:52:54 GMT skipper Set up "maxTimeToWaitForFirstFile" timer for 10000ms
Tue, 13 Oct 2015 10:52:58 GMT skipper passed control to app because first file was received
Tue, 13 Oct 2015 10:52:58 GMT skipper waiting for any text params
Upstream: Pumping incoming file through field `audio`
Parser: Done reading textparam through field `_csrf`
Parser: Done reading textparam through field `ss-name`
Parser: Done reading textparam through field `ss-desc`
Parser: Done reading textparam through field `ss-category`
Parser: Done reading textparam through field `ss-language`
Parser: Done reading textparam through field `ss-tags`
Parser: Done reading textparam through field `ss-place`
Parser: Done reading textparam through field `ss-place-lat`
Parser: Done reading textparam through field `ss-place-lon`
Tue, 13 Oct 2015 10:53:11 GMT skipper Something is trying to read from Upstream `audio`...
Tue, 13 Oct 2015 10:53:11 GMT skipper Passing control to app...
Tue, 13 Oct 2015 10:53:16 GMT skipper maxTimeToWaitForFirstFile timer fired- as of now there are 1 file uploads pending (so it's fine)
debug: [ { stream: [Object], status: 'bufferingOrWriting' } ]
Tue, 13 Oct 2015 10:53:41 GMT skipper .upload() called on upstream
Acquiring new Upstream for field `image`
Tue, 13 Oct 2015 10:53:46 GMT skipper Set up "maxTimeToWaitForFirstFile" timer for 10000ms
debug: []
Not sure why, but it seems control is passed to the app before the image file is written. Again, this only happens with a larger audio file. Is there a way to fix this?
EDIT:
More debugging showed that the receivedFirstFileOfRequest listener is called before the image file is written. That is logical, because it listens for the first file upload, but what should happen with the files after it?
EDIT:
Ah... the file doesn't need to be very large at all. A 29 KB file passes and a 320 KB one does not...
I'm trying to load a local file into BigQuery via the API, and it is failing. The file size is 98 MB and it has a bit over 5 million rows. Note that I have loaded tables with the same number of rows and a slightly bigger file size without problems in the past.
The code I am using is exactly the same as the one in the API documentation, which I have used successfully to upload several other tables. The error I get is the following:
Errors:
Line:2243530, Too few columns: expected 5 column(s) but got 3 column(s)
Too many errors encountered. Limit is: 0.
Job ID: job_6464fc24a4414ae285d1334de924f12d
Start Time: 9:38am, 7 Aug 2012
End Time: 9:38am, 7 Aug 2012
Destination Table: 387047224813:pos_dw_api.test
Source URI: uploaded file
Schema:
tbId: INTEGER
hdId: INTEGER
vtId: STRING
prId: INTEGER
pff: INTEGER
Note that the same file loads just fine from Cloud Storage (dw_tests/TestCSV/test.csv), so the problem cannot be the reported one about a line having fewer columns, since it would fail from Cloud Storage too; I have also checked that all the rows have the correct format.
The following jobs have the same problem; the only differences are the table name and the names of the fields in the schema (it is the same data file, fields, and types). In those attempts it reported a different problematic row:
Line:4288253, Too few columns: expected 5 column(s) but got 4 column(s)
The jobs are the following:
job_cbe54015b5304785b874baafd9c7e82e load FAILURE 07 Aug 08:45:23 0:00:34
job_f634cbb0a26f4404b6d7b442b9fca39c load FAILURE 06 Aug 16:35:28 0:00:30
job_346fdf250ae44b618633ad505d793fd1 load FAILURE 06 Aug 16:30:13 0:00:34
The error that the Python script returns is the following:
{'status': '503', 'content-length': '177', 'expires': 'Fri, 01 Jan 1990 00:00:00 GMT', 'server': 'HTTP Upload Server Built on Jul 27 2012 15:58:36 (1343429916)', 'pragma': 'no-cache', 'cache-control': 'no-cache, no-store, must-revalidate', 'date': 'Tue, 07 Aug 2012 08:36:40 GMT', 'content-type': 'application/json'}
{
"error": {
"errors": [
{
"domain": "global",
"reason": "backendError",
"message": "Backend Error"
}
],
"code": 503,
"message": "Backend Error"
}
}
This looks like there may be an issue at BigQuery. How can I fix this problem?
The temporary files were still around for this import, so I was able to check out the file we tried to import. For job job_6464fc24a4414ae285d1334de924f12d, the last lines were:
222,320828,bot,2,0
222,320829,bot,4,3
222,320829,
It looks like we dropped part of the input file at some point... The input specification says that the MD5 hash should be 58eb7c2954ddfa96d109fa1c60663293, but our hash of the data is 297f958bcf94959eae49bee32cc3acdc, and the file size should be 98921024 bytes, but we only have 83886080 bytes.
I'll look into why this is occurring. In the meantime, imports through Google Storage use a much simpler path and should be fine.
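For reference, a Cloud Storage based load can be kicked off with the bq command-line tool along these lines (a sketch only: it assumes dw_tests is the bucket holding TestCSV/test.csv and reuses the destination table and schema listed above):

# Assumes gs://dw_tests/TestCSV/test.csv exists; the schema matches the one
# shown earlier in this question.
bq load --source_format=CSV 387047224813:pos_dw_api.test gs://dw_tests/TestCSV/test.csv \
    tbId:INTEGER,hdId:INTEGER,vtId:STRING,prId:INTEGER,pff:INTEGER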
I have a CGI script which takes about 1 minute to run. Right now Apache only returns results to the browser once the process has finished.
How can I make it show the output like it was run on a terminal?
Here is an example which demonstrates the problem.
I want to see the numbers 1 to 5 appear as they are printed.
I had to disable mod_deflate to get chunked mode working with Apache.
I did not find another way for my CGI to disable the automatic gzip encoding.
There are several factors at play here. To eliminate a few possibilities first: Apache and bash are not buffering any of the output. You can verify that with this script:
#!/bin/sh
cat <<END
Content-Type: text/plain

END
for i in $(seq 1 10)
do
    echo $i
    sleep 1
done
Stick this somewhere that Apache is configured to execute CGI scripts, and test with netcat:
$ nc localhost 80
GET /cgi-bin/chunkit.cgi HTTP/1.1
Host: localhost

HTTP/1.1 200 OK
Date: Tue, 24 Aug 2010 23:26:24 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.7l DAV/2
Transfer-Encoding: chunked
Content-Type: text/plain

2
1
2
2
2
3
2
4
2
5
2
6
2
7
2
8
2
9
3
10
0
When I do this, I see in netcat each number appearing once per second, as intended.
Note that my version of Apache, at least, applies the chunked transfer encoding automatically, presumably because I didn't include a Content-Length; if you return the Transfer-Encoding: chunked header yourself, then you need to encode the output of your script in the chunked transfer encoding. That's pretty easy, even in a shell script:
chunk () {
printf '%x\r\n' "${#1}" # Length of the chunk in hex, CRLF
printf '%s\r\n' "$1" # Chunk itself, CRLF
}
chunk $'1\n' # This is a Bash-ism, since it's pretty hard to get a newline
chunk $'2\n' # character portably.
However, serve this to a browser, and you'll get varying results depending on the browser. On my system, Mac OS X 10.5.8, I see different behaviors between my browsers. In Safari, Chrome, and Firefox 4 beta, I don't start seeing output until I've sent somewhere around 1000 characters (I would guess 1024 including the headers, or something like that, but I haven't narrowed it down to the exact behavior). In Firefox 3.6, it starts displaying immediately.
I would guess that this delay is due to content type sniffing, or character encoding sniffing, which are in the process of being standardized. I have tried to see if I could get around the delay by specifying proper content types and character encodings, but without luck. You may have to send some padding data (which would be pretty easy to do invisibly if you use HTML instead of plain text), to get beyond that initial buffer.
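For instance, if you do switch to HTML, one hypothetical way to push past such a buffer is to emit roughly a kilobyte of padding inside an HTML comment before the real content (the 1 KB figure is only the guess mentioned above):

# Hypothetical padding: ~1 KB of spaces hidden in an HTML comment, sent early
# so the browser's sniffing buffer fills up before the real content arrives.
printf '<!-- %s -->\n' "$(printf '%1024s' ' ')"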
Once you start streaming HTML instead of plain text, the structure of your HTML matters too. Some content can be displayed progressively, while some cannot. For instance, streaming down <div>s into the body, with no styling, works fine, and can display progressively as it arrives. If you try to open a <pre> tag, and just stream content into that, Webkit based browsers will wait until they see the close tag to try to lay that out, while Firefox is happy to display it progressively. I don't know all of the corner cases; you'll have to experiment to see what works for you.
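To make the progressive case concrete, here is a variation of the earlier shell script that streams <div>s instead of plain text; it is just a sketch along the same lines as the script above:

#!/bin/sh
# Same idea as the plain-text script, but emitting HTML <div>s, which most
# browsers will lay out progressively as each one arrives.
printf 'Content-Type: text/html\r\n\r\n'
printf '<html><body>\n'
for i in $(seq 1 5)
do
    printf '<div>%s</div>\n' "$i"
    sleep 1
done
printf '</body></html>\n'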
Anyhow, I hope this helps you get started. Let me know if you have any more questions!