Minimum size of .ts file - testing

I'm writing a script to test some .ts files. At this point, I want to check whether each .ts file has any content, so I need to know the minimum size of a 360p-quality .ts file (let's say it's just 0.00001 s long). So can anyone tell me the minimum size of a .ts file in 360p quality? Or is it just 0 bytes?

0 bytes is "valid" in the sense that the TS file exists but contains no content. The minimum size of a parseable TS file is 188 bytes: TS is broken into 188-byte packets, padded if the payload is smaller. But a 188-byte TS file will not be playable; you at least need a PAT and a PMT, and even then it contains no video (or audio). The smallest video frame I have ever created was 603 bytes (64x64 px). On top of that we need at minimum a TS header (4 bytes), an adaptation field with PCR (8 bytes), and a PES header with PTS (13 bytes). 603 + 4 + 8 + 13 = 628, and 628 / 188 ≈ 3.34, which rounds up to 4 packets, plus one each for the PAT and PMT: 188 * 6 = 1128 bytes. A single audio frame will likely not take more than one packet, so add another 188 for that.
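The packet arithmetic above can be sketched in a few lines (the 603-byte frame and the per-packet overhead figures are taken from the answer; they are the author's measurements, not fixed constants):

```python
TS_PACKET = 188  # every MPEG-TS packet is exactly 188 bytes

# Figures from the answer above
video_frame = 603      # smallest encoded video frame (64x64 px)
ts_header = 4          # TS packet header
af_pcr = 8             # adaptation field carrying the PCR
pes_header = 13        # PES header with PTS

payload = video_frame + ts_header + af_pcr + pes_header  # 628 bytes
video_packets = -(-payload // TS_PACKET)                 # ceiling division -> 4 packets

total = (video_packets + 2) * TS_PACKET  # + one PAT packet, + one PMT packet
print(total)                             # 1128; add another 188 for one audio packet
```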

Related

number of bits in an RGB image

I chanced upon this statement while studying the allocation of RAM in embedded devices.
Basically suppose we have an image sensor that uses RGB 5-6-5 format. It captures an image size of 320x240. The author proceeds to use
"There are two 150-KB data buffers that contain the raw data from the image sensor (320x240 in the
RGB 5-6-5 format). "
Does anyone know how two 150 KB data buffers are enough to store the raw image? How can I calculate the image size in bits?
I tried calculating
( 2^5 * 2^6 * 2^5 * 320 * 240 ) * 0.000125 = 629145.6 // in KB.
You should look closer at the definition of the RGB 5:6:5 format. Each color takes up 2 bytes (5 bits for red, 6 bits for green and 5 bits for blue; adding up to 16 bits == 2 bytes), so a raw 320x240 picture takes 320 * 240 * 2 bytes, i.e. 153600 bytes or 150 KB.
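The calculation in one small script (note that the attempt in the question multiplied the number of representable values per channel, 2^5 * 2^6 * 2^5, where it should have summed the bit widths 5 + 6 + 5):

```python
WIDTH, HEIGHT = 320, 240
BYTES_PER_PIXEL = 2           # RGB 5-6-5: 5 + 6 + 5 = 16 bits = 2 bytes

raw_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(raw_bytes)              # 153600
print(raw_bytes / 1024)       # 150.0 -- matches the "150-KB" buffer
```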

Reading avi file, getting single frames with cv2.VideoCapture and video.read, and the png files of the single frames are much bigger than the avi

I'm reading an avi file of approx. 2 MB, 301 frames, 20 frames/sec (a 15-second video) and a size of 1024 x 1096 per frame.
When I read the single frames with cv2 and resave them at the original size as png, I get approx. 600 KB per picture/frame. So I have in total 301 * 600 KB = 181 MB (the original avi had 2 MB).
Any idea why this is happening and how to reduce the file size of the single frames without changing the resolution? The idea is to generate single frames from the original video, run detections with a CNN, and resave the video with the detections included; the output video should be very similar to the input video (approx. the same file size; it need not be avi format).
PNG files of single frames are in most cases much larger than the original video file, because the video is compressed by a codec (https://www.fourcc.org/codecs.php). For example, use the following command on Linux to create a compressed avi from the frames:
ffmpeg -i FramePicName%d.png -vcodec libx264 -f avi aviFileName
You can get the codec that was used to create the original video file with the following Python cv2 code:
import cv2

cap = cv2.VideoCapture(videoFile)
fourcc = cap.get(cv2.CAP_PROP_FOURCC)  # or cv2.cv.CV_CAP_PROP_FOURCC on old OpenCV; returns a float
# unpack the four characters of the FOURCC (little-endian byte order)
"".join([chr((int(fourcc) >> 8 * i) & 0xFF) for i in range(4)])

How to get FLAC frame length

I'm studying FLAC decoding, but I can't figure out how to get the FLAC frame length. Please help.
https://xiph.org/flac/format.html
I decoded METADATA_BLOCK_STREAMINFO and got the data below:
mMinBlock: 4096
mMaxBlock: 4096
mMinFrame: 1201
mMaxFrame: 12804
mSampleRate: 44100
mBitPerSample: 16
mTotalSample: 14170212
Then I started to analyse the first frame; below is the info from the first frame header:
isFixBlock = true
blockSize = 12
sampleRate = 9
channel = 10
sampleSize = 4
number = 0
Blocking strategy is fixed-blocksize;
Block size: 1100, it means 256 * (2^(12-8)) samples = 4096 samples;
Sample rate: 1001 : 44.1kHz;
Channel: 2;
Sample size: 100 : 16 bits per sample;
So from the above information, we know this frame has 4096 samples at 16 bits per sample. That means the frame length is at least (ignoring the subframe headers, frame footer, etc.) 4096 * 16 / 8 = 8192 bytes. But if I check the FLAC file manually, the offset gap between the first and second frames is only 2976 bytes, which means the length of the first frame is only 2976 bytes. Is there anything wrong with my calculation?
My goal is to get the frame offset and frame length of every frame; is there any good way to do this? I know there is the sync code 0xFFF8, but scanning for it is very inefficient.
Thanks in advance!
From http://lists.xiph.org/pipermail/flac-dev/2016-February/005845.html
The frame length you calculated (8192 bytes) is that of the decoded frame, not of the FLAC frame. As it is compressed, it should indeed be smaller than 8192 bytes.
There is no direct way to find the frame length except finding where the next frame starts.
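A minimal sketch of the sync-code scan the asker mentions, assuming the 14-bit sync pattern 0b11111111111110 (0xFF followed by a byte whose top six bits are 111110). This only finds candidate frame starts; a real decoder must also validate the frame header's CRC-8 to reject the same bit pattern occurring inside compressed audio data:

```python
def flac_frame_offsets(data: bytes) -> list:
    """Return byte offsets of candidate FLAC frame headers in `data`.

    Matches the 14-bit sync code: first byte 0xFF, second byte 111110xx
    (the low two bits are the reserved bit and the blocking-strategy bit).
    """
    offsets = []
    for i in range(len(data) - 1):
        if data[i] == 0xFF and (data[i + 1] & 0xFC) == 0xF8:
            offsets.append(i)
    return offsets
```

The gap between consecutive candidate offsets (once validated) gives each frame's length, which is how the 2976-byte figure in the question would be recovered.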

Ext4 FS: max file size

If I am not wrong, then with triple indirect addressing the maximum file size for ext3 would be (4G + 4M + 4K).
Likewise, what will be the maximum file size for an ext4 FS using extents, if we assume a 4 KB disk block size?
BS = Block size = 4 KiB
You missed the 12 direct pointers in your calculation for ext3:
(12*BS = 48 KiB) + 4 MiB + 4 GiB + 4 TiB
For ext4, we use 32-bit indexing for the extents:
2^32 * (BS = 2^12) = 2^44 = 16 TiB
which is the number you would e.g. also find on Wikipedia for the maximum ext4 file size.
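The arithmetic above, spelled out (assuming 4 KiB blocks and 4-byte block pointers, as in the answer):

```python
BS = 4096            # block size: 4 KiB
PTRS = BS // 4       # 1024 four-byte block pointers fit in one indirect block

# ext3: 12 direct pointers + single + double + triple indirect
ext3_max = (12 + PTRS + PTRS**2 + PTRS**3) * BS
print(ext3_max)      # = 48 KiB + 4 MiB + 4 GiB + 4 TiB

# ext4 with extents: 32-bit logical block numbers
ext4_max = 2**32 * BS
print(ext4_max // 2**40)   # 16 (TiB)
```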

How to read no. pixels per res. unit in TIFF header

I'm trying to read a TIFF image that has been exported from a Leica (SP5) program. I can read other details (e.g. bits per sample, image size x, image size y) from the tags defined in the TIFF documentation; I'm crudely reading the header out as unsigned integers until I get to a certain tag number.
I know that at tag 296 my 'Resolution Unit' is cm. Tags 282 and 283 are supposed to give me the number of pixels (in x and y) per resolution unit. I'm not sure how to do this. Can someone please help?
Well, if at 296 you discover what the unit type is (either 1 = no absolute unit, 2 = inch, or 3 = centimeter) and at 282 and 283 you get XResolution and YResolution respectively, then you have everything you need: XResolution and YResolution are themselves the number of pixels per resolution unit along x and y.
If you want a single per-unit-area measure, multiply them:
XResolution * YResolution = PixelsPerSquareUnit
since that is the pixel count of the rectangle covering one square resolution unit.
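One detail worth knowing for the crude header-reading approach: tags 282 and 283 store values of TIFF type RATIONAL, i.e. two unsigned 32-bit integers (numerator over denominator), located at the offset given in the IFD entry. A minimal sketch of decoding one such value, assuming you already have the raw bytes and the offset (`read_rational` is a hypothetical helper, not part of any library):

```python
import struct

def read_rational(buf: bytes, offset: int, little_endian: bool = True) -> float:
    """Read a TIFF RATIONAL: two unsigned 32-bit ints, numerator / denominator.

    The byte order comes from the TIFF header ("II" = little-endian,
    "MM" = big-endian).
    """
    fmt = "<II" if little_endian else ">II"
    num, den = struct.unpack_from(fmt, buf, offset)
    return num / den

# Example: a resolution of 300 pixels per unit stored as 300/1
buf = struct.pack("<II", 300, 1)
print(read_rational(buf, 0))   # 300.0
```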