Split CMSampleBufferRef containing Audio - cocoa-touch

I'm splitting the recording into different files while recording...
The problem is that the video and audio sample buffers delivered to captureOutput: don't correspond 1:1 (which is logical):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
AUDIO START: 36796.833236847 | DURATION: 0.02321995464852608 | END: 36796.856456802
VIDEO START: 36796.842089239 | DURATION: nan | END: nan
AUDIO START: 36796.856456805 | DURATION: 0.02321995464852608 | END: 36796.87967676
AUDIO START: 36796.879676764 | DURATION: 0.02321995464852608 | END: 36796.902896719
VIDEO START: 36796.875447239 | DURATION: nan | END: nan
...
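For reference, here is a minimal sketch of how timings like the log above can be printed inside captureOutput: (only sampleBuffer comes from the delegate; the rest is mine). Video buffers typically carry an invalid duration, and CMTimeGetSeconds() of an invalid CMTime is NaN, which explains the nan entries:

CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime dur = CMSampleBufferGetDuration(sampleBuffer);  // kCMTimeInvalid for the video frames here
NSLog(@"START: %f | DURATION: %f | END: %f",
      CMTimeGetSeconds(pts),
      CMTimeGetSeconds(dur),
      CMTimeGetSeconds(CMTimeAdd(pts, dur)));  // invalid + anything stays invalid -> nan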
So I need to split the audio CMSampleBufferRef based on time, and use the first segment for the first video and the second part of the buffer for the second video.
It would also be possible to do this with AVMutableComposition and AVAssetExportSession while exporting, but the question is about the buffer level in captureOutput:, so that the recorded file doesn't need more processing.
Update:
There look to be three options, none successfully implemented yet:
1) CMSampleBufferCopySampleBufferForRange
This looks like the way to go, but I'm struggling to compute the last argument, sampleRange (see the sketch after this list)...
2) CMSampleBufferCreateCopyWithNewTiming
I'm quite lost using this one.
3) There also seems to be a way to trim the buffer by attaching kCMSampleBufferAttachmentKey_TrimDurationAtStart / kCMSampleBufferAttachmentKey_TrimDurationAtEnd using CMSetAttachment.
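A minimal sketch of option 1 (with option 3 at the end), assuming the audio buffers hold uniform PCM samples so a time offset maps linearly to a sample index; splitTime and the other variable names are mine, not from the original code:

// Option 1: split `sampleBuffer` at `splitTime` by turning the time
// offset into a sample index (audio samples are uniform in duration).
CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTime dur = CMSampleBufferGetDuration(sampleBuffer);
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);

Float64 elapsed = CMTimeGetSeconds(CMTimeSubtract(splitTime, pts));
CMItemCount splitIndex =
    (CMItemCount)(elapsed / CMTimeGetSeconds(dur) * numSamples);

CMSampleBufferRef firstPart = NULL;   // belongs to the first file
CMSampleBufferRef secondPart = NULL;  // belongs to the second file
CMSampleBufferCopySampleBufferForRange(kCFAllocatorDefault, sampleBuffer,
    CFRangeMake(0, splitIndex), &firstPart);
CMSampleBufferCopySampleBufferForRange(kCFAllocatorDefault, sampleBuffer,
    CFRangeMake(splitIndex, numSamples - splitIndex), &secondPart);

// Option 3: instead of copying, mark the leading part of the buffer as
// trimmed away; the trim duration is attached as a CMTime dictionary.
CFDictionaryRef trim =
    CMTimeCopyAsDictionary(CMTimeSubtract(splitTime, pts), kCFAllocatorDefault);
CMSetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_TrimDurationAtStart,
                trim, kCMAttachmentMode_ShouldPropagate);
CFRelease(trim);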

GPS data blocked by other tasks in the while loop

I am trying to parse GPS data while reading a pressure sensor and an IMU sensor and writing data to an SD card. Since reading the sensors and writing to the SD card take some time, and the GPS doesn't wait for my command to send its data, I lose some GPS bytes and my parser cannot find a meaningful message. I use a UART receive interrupt to take the GPS data and a circular buffer to save it; after that, I parse it. Since I don't know how many bytes come from the GPS, I read them one by one. I tried FreeRTOS but it did not work. How can I prevent other tasks from blocking the GPS data? I am using an STM32F401CC.
Here are my FreeRTOS tasks:
void StartDefaultTask(void* argument)
{
    IMU_setParameters(&imu, &hi2c1, imu_ADD_LOW, GPIOB, GPIOB,
                      GPIO_PIN_1, GPIO_PIN_2);
    IMU_init(&imu, &htim3);
    while ((calState.accel != 3 || calState.system != 3 || calState.gyro != 3 || calState.mag != 3) && calibFlg)
    {
        IMU_getCalibrationState(&imu, &calState);
    }
    preSensor_init_default_params(&preSensor.params);
    preSensor.addr = preSensor_I2C_ADDRESS_1;
    preSensor.i2c = &hi2c1;
    preSensor_init(&preSensor, &preSensor.params);
    initSD_CARD(&USERFatFS, USERPath);
    samplePacket(&telemetry);
    controlRecoveryFile(&recoveryFile, "recoveryFile.txt", &telemetry);
    for (;;)
    {
        IMU_getDatas(&imu, &calState, &linearAccel, &IMU, &imuFlg, &offsetFlg, &calibCount);
        preSensor_force_measurement(&preSensor);
        preSensor_read_float(&preSensor, &temperature, &pressure, &humidty);
        preSensor_get_height(pressure, &height);
        telemetry.Altitude_PL = height;
        telemetry.Pressure_PL = pressure;
        telemetry.Temperature = temperature;
        telemetry.YRP[0] = IMU.yaw;
        telemetry.YRP[1] = IMU.roll;
        telemetry.YRP[2] = IMU.pitch;
        if (calibCount % 10 == 0)
        {
            writoToTelemetryDatas(&logFile, "tulparLog.txt", &telemetry, 0);
            if (!writeToRecoveryDatas(&recoveryFile, "recoveryFile.txt", &telemetry))
                connectionFlg = 1;
        }
        osDelay(1);
    }
}

void StartTask02(void* argument)
{
    arrangeCircularBuffer(&gpsCircular, buffer, BUFFER_LENGTH);
    initGPS(&huart1, &rDATA, &gps);
    for (;;)
    {
        getGPSdata(&huart1, &gpsCircular, &gps, &rDATA);
        osDelay(1);
    }
}
Here is my solution to the problem.
First of all, I do not use FreeRTOS at all; I do everything in the main loop. The problem is a race condition. My GPS parser has four states: MSG_ID, Finish, Check, Parse. These states do not take four loop iterations to find a meaningful message; it depends on the message length, and it can take up to 103 iterations. Meanwhile, in the main loop, my IMU sensor, pressure sensor, and SD card writes take approximately 80 ms per pass. As you know, the GPS works independently of our code: it does not wait for our command, and it sends its data every 1 second.
Now imagine your GPS sends data every second and your circular buffer holds 200 bytes. Your parser begins to parse a message, but it needs at least 30+ iterations to find it, and 30 * 80 ms = 2400 ms (2.4 s). Before you find meaningful data, the GPS has sent two more messages and the buffer has overflowed.
To fix this, I call my GPS parser in a for loop inside the main loop, and I send a command to the GPS so that it only outputs GPGGA and GPRMC sentences (for the GPS command you can look here). I use the UART receive interrupt to store incoming bytes in my circular buffer. After receiving two '\n' characters, I stop taking data and let my parser process what has arrived; at the end I restart the UART reception. The important thing here is calling the parser in a for loop (8, 16, or 24 iterations, depending on the delays of your other tasks), as the sketch below shows.
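A minimal sketch of that arrangement, not the original code (STM32 HAL assumed; read_sensors_and_log() and gps_parser_step() are illustrative stubs standing in for the sensor work and the four-state NMEA parser):

#include "stm32f4xx_hal.h"

#define GPS_BUF_LEN 256

static volatile uint8_t  gps_buf[GPS_BUF_LEN];
static volatile uint16_t gps_head = 0;   /* written only by the ISR    */
static uint16_t          gps_tail = 0;   /* read only by the main loop */
static uint8_t           rx_byte;
extern UART_HandleTypeDef huart1;

/* Illustrative stubs for the ~80 ms sensor pass and the parser. */
static void read_sensors_and_log(void) { /* IMU + pressure + SD card */ }
static void gps_parser_step(uint8_t b)  { (void)b; /* MSG_ID/Parse/Check/Finish */ }

/* One byte per interrupt goes into the circular buffer; then re-arm. */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1) {
        gps_buf[gps_head] = rx_byte;
        gps_head = (uint16_t)((gps_head + 1) % GPS_BUF_LEN);
        HAL_UART_Receive_IT(&huart1, &rx_byte, 1);
    }
}

int main(void)
{
    HAL_Init();
    /* ... clock, GPIO, I2C, UART, SD card init ... */
    HAL_UART_Receive_IT(&huart1, &rx_byte, 1);

    for (;;) {
        read_sensors_and_log();

        /* The key point: drain several parser steps per pass so a full
         * NMEA sentence (up to ~103 steps) is consumed well within the
         * GPS's 1 s update period, instead of one step per ~80 ms pass. */
        for (int i = 0; i < 16 && gps_tail != gps_head; ++i) {
            gps_parser_step(gps_buf[gps_tail]);
            gps_tail = (uint16_t)((gps_tail + 1) % GPS_BUF_LEN);
        }
    }
}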

How can I use one MediaPipe graph to process multiple cameras (RTSP streams)?

For example: 16 cameras but only one GPU on the server, which can maybe support at most 4 graph instances to decode, run inference, and then encode. So each graph needs to process 4 video streams, but I haven't found any config like camera_id or source_id in MediaPipe yet.

AviSynth - turn sound off

I am using AviSynth+ and I play an .avs script in VLC (I've installed the AviSynth plugin for VLC).
My script is very basic and it looks like this:
DirectShowSource("D:\MyVideo.asf", fps=25, convertfps=true)
How can I turn off the sound of the video, only for the first two minutes of the video?
I am using Windows 8, 64-bit.
I know that this is a very late answer, but an easy way to do it would be to split the clip into two segments, silence the first segment with Amplify, and then rejoin them:
source = DirectShowSource("D:\MyVideo.asf", fps=25, convertfps=true)
numSeconds = 60 * 2
numFrames = Round(source.FrameRate() * numSeconds)
beginning = source.Trim(0, numFrames - 1, False)
rest = source.Trim(numFrames, 0, False)
beginning.Amplify(0.0) + rest
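Note on the design: Amplify(0.0) multiplies the audio samples by zero rather than removing the audio track, so both segments keep an audio stream in the same format and the final splice with + stays valid.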

Issue connecting to the Image Acquisition Device using HALCON

My setup includes a PoE camera connected directly to my computer, on which I have HDevelop. For the past few days I have been running into a problem wherein the first attempt to connect to the camera using HDevelop fails.
When using Connect from the Image Acquisition GUI, I get an error stating "HALCON ERROR. Image acquisition: device cannot be initialized".
When using the open_framegrabber() operator from the Program Console, I get the same error, with the addition of the HALCON error code 5312.
After I get this error, attempting the connection again succeeds. This is the workaround I have at the moment, but it's annoying as it keeps repeating quite frequently, and I am not sure what the cause of this issue is. I tried pinging my camera from the command prompt, which did not show any ping losses, and when using the camera from VIMBA Viewer I do not get such connection issues.
I know this is not a support site where I should be asking such questions, but if anyone can give me some input on solving this, it would be of great help.
Regards,
Sanjay
To answer your question it is important to understand the HALCON framegrabber communication object; I assume you are coding in HDevelop.
To create the communication channel with the camera properly, and to avoid the connection being rejected (due to parameter misconfiguration), you should specify the camera's device ID when creating the framegrabber rather than relying on the default options.
To query the devices connected to your board for your communication protocol, use:
info_framegrabber('GigEVision2', 'info_boards', Information, ValueList)
where the first parameter is the communication protocol, and ValueList will return all the information about the connected devices as token:param entries separated by '|', e.g.:
| device:ac4ffc00d5db_SVSVISTEKGmbH_eco274MVGE67 | unique_name:ac4ffc00d5db_SVSVISTEKGmbH_eco274MVGE67 | interface:Esen_ITF_78d004253353c0a80364ffffff00 | producer:Esen | vendor:SVS-VISTEK GmbH | model:eco274MVGE67 | tl_type:GEV | device_ip:192.168.3.101/24 | interface_ip:192.168.3.100/24 | status:busy | device:ac4ffc009cae_SVSVISTEKGmbH_eco274MVGE67 | unique_name:ac4ffc009cae_SVSVISTEKGmbH_eco274MVGE67 | interface:Esen_ITF_78d004253354c0a80264ffffff00 | producer:Esen | vendor:SVS-VISTEK GmbH | model:eco274MVGE67 | tl_type:GEV | device_ip:192.168.2.101/24 | interface_ip:192.168.2.100/24 | status:busy | device:ac4ffc009dc6_SVSVISTEKGmbH_eco274MVGE67 | unique_name:ac4ffc009dc6_SVSVISTEKGmbH_eco274MV
... (and so on)
In this way you can extract the device ID (the device: token) automatically and pass it when creating the framegrabber:
open_framegrabber ('GigEVision2', 0, 0, 0, 0, 0, 0, 'default', -1, 'default', -1, 'false', 'put the device ID here', '', -1, -1, AcqHandle)
In the end you will be able to establish a direct connection, or build an automatic re-connection routine.
I hope this information helps you.

Serialization of a Base64 string in a JSON payload with HessianKit (Objective-C/Cocoa)

I'm trying to connect my iOS-App to an existing Grails backend server. The backend exposes a hessian webservice by using the grails remoting plugin (version 1.3). My Android app successfully calls all webservice methods.
My goal is to transmit a JPEG image from the phone to the server (this works in the Android app). My approach is to create a JSON object with JSONKit and include the image as a base64-encoded string. I'm using HessianKit in an Xcode 4 project with ARC, targeting iOS 4.2, with Nick Lockwood's NSData+Base64 categories for Base64 encoding (https://github.com/nicklockwood/Base64).
Here's my code:
NSMutableDictionary *jsonPayload = [NSMutableDictionary dictionary];
[jsonPayload setObject:[theImage base64EncodedString] forKey:@"photo"];
NSString *jsonString = [jsonPayload JSONString];
NSURL *url = server_URL;
id<BasicAPI> proxy = (id<BasicAPI>)[CWHessianConnection proxyWithURL:url protocol:@protocol(BasicAPI)];
[proxy addImage:jsonString];
The problem is that the server throws an exception when called by the app:
threw exception [Hessian skeleton invocation failed; nested exception is com.caucho.hessian.io.HessianProtocolException: addImage__1: expected string at 0x7b ({)] with root cause
Message: addImage__1: expected string at 0x7b ({)
Line | Method
->> 1695 | error in com.caucho.hessian.io.HessianInput
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 1681 | expect in ''
| 1473 | parseChar in ''
| 792 | readString in ''
| 181 | readObject in com.caucho.hessian.io.BasicDeserializer
| 1078 | readObject in com.caucho.hessian.io.HessianInput
| 300 | invoke . . in com.caucho.hessian.server.HessianSkeleton
| 221 | invoke in ''
| 886 | runTask . in java.util.concurrent.ThreadPoolExecutor$Worker
| 908 | run in ''
^ 680 | run . . . in java.lang.Thread
All other JSON payloads from my app (strings, dates, numbers, etc.) can be deserialized by the server without any problem, and the other way round also works: sending a base64-encoded image as a JSON payload from the server to the app as a response is fine.
After spending hours reading bug reports and mailing lists, I suspect that the problem might be that HessianKit only supports the Hessian 1 protocol, while the Hessian version shipped with remoting 1.3 is 4.0.7, which probably uses the Hessian 2 protocol and isn't backwards compatible. But that's just a guess.
EDIT: Actually, the issue has nothing to do with JSON. The same exception is thrown when I just pass the string as a normal string (and not embedded in JSON) to the webservice.
Has anyone experienced a similar problem and found a solution?