How to redirect console output into a file in VxWorks shell?

In Windows or Linux, we very often redirect console output to a file, like this:
Windows:
dir > text
Linux:
ls -l > text
I wonder how to do something similar in the VxWorks shell.

You can try something like this:
-> saveFd = open("myfile.txt",0x102, 0777 )
-> oldFd = ioGlobalStdGet(1)
-> ioGlobalStdSet(1, saveFd)
-> runmytest()
...
-> ioGlobalStdSet(1, oldFd)
This will redirect all output to the file that you have opened, in this case myfile.txt.

I changed "0x102" in "saveFd = open("myfile.txt",0x102, 0777 )" into "0x202", then it works. All console display was redirected into "myfile.txt".
In previous post, I got a mistake. I thought it hung after "ioGlobalStdSet(1, saveFd)". It's not hung, but redirecting all display into "myfile.txt" and I used "CTL-C" to stop the redirection.
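For reference, the same sequence can be wrapped in a small C routine that uses the symbolic open() flags instead of the magic number. A minimal sketch, assuming O_CREAT | O_RDWR is the intended 0x202; runmytest() stands in for whatever call you want to capture:
#include <fcntl.h>  /* O_CREAT, O_RDWR */
#include <ioLib.h>  /* open, close, ioGlobalStdGet/Set, STD_OUT */

void capture_console(void)
{
    int saveFd = open("myfile.txt", O_CREAT | O_RDWR, 0777);
    int oldFd;

    if (saveFd < 0)
        return;
    oldFd = ioGlobalStdGet(STD_OUT);  /* STD_OUT is fd 1 */
    ioGlobalStdSet(STD_OUT, saveFd);  /* global stdout now goes to the file */
    /* runmytest(); */                /* run whatever you want to capture */
    ioGlobalStdSet(STD_OUT, oldFd);   /* restore the console */
    close(saveFd);
}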

The following code snippet shows how to save all serial console output to a file.
Try -> tyco0_write_to_file 4096
#include <vxWorks.h>
#include <tyLib.h>
#include <private/iosLibP.h>
#include <ioLib.h>   /* write, close */
#include <fcntl.h>   /* open, O_CREAT, O_RDWR */
#include <stdio.h>   /* perror, printf */

int tyco0_log_max_size = -1;
int tyco0_log_fd = -1;

/* Replacement driver write routine: copies everything written to /tyCo/0
   into the log file, then forwards it to the real tyWrite(). */
int tyco0_write_hook(TY_DEV_ID pTyDev, char *buffer, int nbytes)
{
    FD_ENTRY *p_fd_entry;
    static int bytes_written = 0;

    if ('0' == pTyDev->devHdr.name[6]) { /* /tyCo/0 */
        if ((bytes_written + nbytes) < tyco0_log_max_size) {
            (void)write(tyco0_log_fd, buffer, nbytes);
            bytes_written += nbytes;
        }
        else {
            /* Log full: write the remainder, restore the original driver
               write routine and close the log file. */
            (void)write(tyco0_log_fd, buffer, tyco0_log_max_size - bytes_written);
            p_fd_entry = iosFdMap(1);
            p_fd_entry->pDrvEntry->de_write = tyWrite;
            close(tyco0_log_fd);
            bytes_written = 0;
        }
    }
    return tyWrite(pTyDev, buffer, nbytes);
}

int tyco0_write_to_file(int file_max_size)
{
    FD_ENTRY *p_fd_entry;

    p_fd_entry = iosFdMap(1); /* fd 1 is the console, /tyCo/0 */
    if (NULL == p_fd_entry) {
        perror("iosFdMap");
        return -1;
    }
    tyco0_log_fd = open("/ram/tyco0.log", O_CREAT | O_RDWR, 0);
    if (tyco0_log_fd == -1) {
        perror("open");
        return -1;
    }
    if (file_max_size <= 0) {
        file_max_size = 1024;
    }
    tyco0_log_max_size = file_max_size;
    /* Hook the driver's write routine, but only if it still points at the
       stock tyWrite(). */
    if (p_fd_entry->pDrvEntry->de_write == tyWrite) {
        p_fd_entry->pDrvEntry->de_write = tyco0_write_hook;
    }
    else {
        printf("tyWrite not found\n");
        close(tyco0_log_fd);
        return -1;
    }
    return 0;
}
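If you want to stop logging before the size limit is reached, a small companion routine (hypothetical, not in the original post) can undo the hook the same way the hook undoes itself:
#include <private/iosLibP.h>
#include <tyLib.h>

extern int tyco0_log_fd;  /* from the snippet above */

int tyco0_write_to_file_stop(void)
{
    FD_ENTRY *p_fd_entry = iosFdMap(1);  /* fd 1: the console, /tyCo/0 */

    if (p_fd_entry == NULL)
        return -1;
    /* Put the stock driver write routine back, then close the log.
       NB: the static bytes_written counter inside tyco0_write_hook is not
       reset here, so a later tyco0_write_to_file() call resumes counting
       from the old value. */
    p_fd_entry->pDrvEntry->de_write = tyWrite;
    if (tyco0_log_fd != -1) {
        close(tyco0_log_fd);
        tyco0_log_fd = -1;
    }
    return 0;
}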

Related

How to save the output of the ifconfig command into a buffer?

I have to save the output of the command "ifconfig" into a char buffer using C++ and VxWorks.
How can I do it?
ifconfig is a shell command, so you should be able to redirect its output to a file using '>' and then read that file.
You can also have a look at the topic 'Redirecting Shell IO' in the manual.
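A minimal sketch of the read-back half of that suggestion, assuming the shell redirect has already produced a file (the path /ram/ifconfig.txt is only an illustration):
#include <stdio.h>

static char ifconfig_buf[16 * 1024];

int read_ifconfig_output(void)
{
    FILE *fp = fopen("/ram/ifconfig.txt", "r");  /* hypothetical output path */
    size_t n;

    if (fp == NULL) {
        perror("fopen");
        return -1;
    }
    n = fread(ifconfig_buf, 1, sizeof(ifconfig_buf) - 1, fp);
    ifconfig_buf[n] = '\0';  /* NUL-terminate so the buffer is usable as a string */
    fclose(fp);
    return (int)n;
}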
Here is an example using a pipe to save the output of ifconfig into a buffer.
Try -> pipe_test and -> puts &pipe_buf on the C interpreter shell. Good luck.
#include <vxWorks.h>
#include <ioLib.h>    /* ioTaskStdGet/Set, ioctl, FIONMSGS, STD_OUT */
#include <pipeDrv.h>  /* pipeDevCreate, pipeDevDelete */
#include <fcntl.h>    /* open, O_RDWR */
#include <stdio.h>    /* perror */
#include <string.h>   /* memset */

char pipe_buf[128*256];

int pipe_test()
{
    char *pipe_name = "pipe01";
    int pipe_fd;
    int out_fd;
    int nlines;
    int nbytes;

    if (pipeDevCreate(pipe_name, 128, 256) == ERROR) { /* 128 messages of 256 bytes */
        perror("pipeDevCreate");
        return -1;
    }
    if ((pipe_fd = open(pipe_name, O_RDWR, 0666)) < 0) {
        pipeDevDelete(pipe_name, TRUE);
        perror("open");
        return -1;
    }
    /* Point this task's stdout at the pipe, run the command, then restore.
       ipcom_run_cmd() is provided by the IPNET network stack. */
    out_fd = ioTaskStdGet(0, STD_OUT);
    ioTaskStdSet(0, STD_OUT, pipe_fd);
    ipcom_run_cmd("ifconfig -a");
    ioTaskStdSet(0, STD_OUT, out_fd);
    /* Drain the pipe one message at a time into pipe_buf. */
    if (ioctl(pipe_fd, FIONMSGS, &nlines) == OK && nlines > 0) {
        char *pbuf = &pipe_buf[0];
        int ln;
        memset(pipe_buf, 0, sizeof(pipe_buf));
        for (ln = 0; ln < nlines && ln < 128; ln++) {
            nbytes = read(pipe_fd, pbuf, 256);
            pbuf += nbytes;
        }
    }
    close(pipe_fd);
    pipeDevDelete(pipe_name, TRUE);
    return 0;
}
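Note that a pipe created with pipeDevCreate() is message-oriented: each read() returns at most one message (roughly one line of ifconfig output, since each write() to the pipe becomes one message), and the FIONMSGS ioctl reports how many unread messages are waiting, which is why the loop reads exactly nlines times.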

How can I write code to receive a whole string from a device over RS-232?

I want to know how to write code that receives a specific string, for example "OK"; out of it I only need the "OK" string.
Another string is also like "OK".
I have written code in Keil C51 for the AT89S52 microcontroller which works, but I need more reliable code.
I'm using an interrupt for receiving data from the RS-232 serial port.
void _esp8266_getch() interrupt 4 // UART Rx
{
    if (TI) {
        TI = 0;
        xmit_bit = 0;
        return;
    }
    else
    {
        count = 0;
        do
        {
            while (RI == 0);
            rx_buff = SBUF;
            if (rx_buff == rx_data1)           // rx_data1 = 0x0D (CR)
            {
                RI = 0;
                while (RI == 0);
                rx_buff = SBUF;
                if (rx_buff == rx_data2)       // rx_data2 = 0x0A (LF)
                {
                    RI = 0;
                    data_in_buffer = 1;
                    if (loop_conti == 1)
                    {
                        if (rec_bit_flag == 1)
                        {
                            data_in_buffer = 0;
                            loop_conti = 0;
                        }
                    }
                }
            }
            else
            {
                if (data_in_buffer == 1)
                {
                    received[count] = rx_buff; // my buffer in which the string is stored
                    rec_bit_flag = 1;
                    count++;
                    loop_conti = 1;
                    RI = 0;
                }
                else
                {
                    loop_conti = 0;
                    rec_bit_flag = 0;
                    RI = 0;
                }
            }
        }
        while (loop_conti == 1);
    }
    rx_buff = 0;
}
This one is just for reference; you will need to develop the logic further for your needs. Moreover, the design depends on what values are received, whether there is a specific pattern, and many other parameters. This is not tested code; I tried to convey my idea of the design. With that disclaimer, here is the sample:
//Assuming you get - "OK<CR><LF>", in which <CR><LF> indicates the end of the string stream
int nCount = 0;
int received[2][BUF_SIZE]; // only 2 buffers used; you can use more than 2, depending on how fast
                           // you receive and how fast you process
int pingpong = 0;
bool bRecFlag = FALSE;
int nNofbytes = 0;

void _esp8266_getch() interrupt 4 // UART Rx
{
    if (TI) {
        TI = 0;
        xmit_bit = 0;
        return;
    }
    if (RI) // Rx interrupt
    {
        received[pingpong][nCount] = SBUF;
        RI = 0;
        if (nCount > 0)
        {
            // check if you received the end-of-stream sequence
            if ((received[pingpong][nCount-1] == 0x0D) && (received[pingpong][nCount] == 0x0A))
            {
                bRecFlag = TRUE;
                pingpong = (pingpong == 0); // swap buffers
                nNofbytes = nCount;
                nCount = 0;
                return;
            }
        }
        nCount++;
    }
    return;
}

int main()
{
    // other stuff
    while (1)
    {
        // other stuff
        if (bRecFlag) // string is completely received
        {
            buftouse = pingpong ? 0 : 1; // when pingpong is 1, buffer 0 has the last complete string;
                                         // when pingpong is 0, buffer 1 has the last complete string
            // copy to another buffer or act on received[buftouse][]
            bRecFlag = FALSE;
        }
        // other stuff
    }
}
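If it helps, the end-of-line detection itself is easy to prototype off-target. Here is a plain-C, host-side sketch (illustrative names, not Keil C51 code) of the same <CR><LF> framing idea, which you can unit-test before moving it into the ISR:
#include <stdio.h>

#define BUF_SIZE 64

static char line_buf[BUF_SIZE];
static int  line_len = 0;

/* Feed one received byte; returns 1 when a complete <CR><LF>-terminated
   line is available in line_buf (CR/LF stripped, NUL-terminated). */
static int feed_byte(char c)
{
    if (c == 0x0A && line_len > 0 && line_buf[line_len - 1] == 0x0D) {
        line_buf[line_len - 1] = '\0';  /* drop the CR, terminate the line */
        line_len = 0;                   /* ready for the next line */
        return 1;
    }
    if (line_len < BUF_SIZE - 1)
        line_buf[line_len++] = c;
    else
        line_len = 0;                   /* overflow: discard and resync */
    return 0;
}

int main(void)
{
    const char *stream = "noise\r\nOK\r\n"; /* simulated UART bytes */
    const char *p;
    for (p = stream; *p; p++)
        if (feed_byte(*p))
            printf("got line: \"%s\"\n", line_buf); /* prints "noise", then "OK" */
    return 0;
}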

Record rtsp stream with ffmpeg in iOS

I've followed iFrameExtractor to successfully stream rtsp in my Swift project. This project also has a recording function. It basically uses avformat_write_header, av_interleaved_write_frame and av_write_trailer to save the rtsp source into an mp4 file.
When I used this project on my device, the rtsp streaming worked fine, but the recording function always generated a blank mp4 file with no image and no sound.
Could anyone tell me what step I am missing?
I'm using an iPhone 5 with iOS 9.1 and Xcode 7.1.1.
The ffmpeg is version 2.8.3, compiled following the instructions in CompilationGuide – FFmpeg.
The following is the sample code in this project.
The function that generates every frame:
-(BOOL)stepFrame {
// AVPacket packet;
int frameFinished=0;
static bool bFirstIFrame=false;
static int64_t vPTS=0, vDTS=0, vAudioPTS=0, vAudioDTS=0;
while(!frameFinished && av_read_frame(pFormatCtx, &packet)>=0) {
// Is this a packet from the video stream?
if(packet.stream_index==videoStream) {
// 20130525 albert.liao modified start
// Initialize a new format context for writing file
if(veVideoRecordState!=eH264RecIdle)
{
switch(veVideoRecordState)
{
case eH264RecInit:
{
if ( !pFormatCtx_Record )
{
int bFlag = 0;
//NSString *videoPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/test.mp4"];
NSString *videoPath = @"/Users/liaokuohsun/iFrameTest.mp4";
const char *file = [videoPath UTF8String];
pFormatCtx_Record = avformat_alloc_context();
bFlag = h264_file_create(file, pFormatCtx_Record, pCodecCtx, pAudioCodecCtx,/*fps*/0.0, packet.data, packet.size );
if(bFlag==true)
{
veVideoRecordState = eH264RecActive;
fprintf(stderr, "h264_file_create success\n");
}
else
{
veVideoRecordState = eH264RecIdle;
fprintf(stderr, "h264_file_create error\n");
}
}
}
//break;
case eH264RecActive:
{
if((bFirstIFrame==false) &&(packet.flags&AV_PKT_FLAG_KEY)==AV_PKT_FLAG_KEY)
{
bFirstIFrame=true;
vPTS = packet.pts ;
vDTS = packet.dts ;
#if 0
NSRunLoop *pRunLoop = [NSRunLoop currentRunLoop];
[pRunLoop addTimer:RecordingTimer forMode:NSDefaultRunLoopMode];
#else
[NSTimer scheduledTimerWithTimeInterval:5.0//2.0
target:self
selector:@selector(StopRecording:)
userInfo:nil
repeats:NO];
#endif
}
// Record audio when 1st i-Frame is obtained
if(bFirstIFrame==true)
{
if ( pFormatCtx_Record )
{
#if PTS_DTS_IS_CORRECT==1
packet.pts = packet.pts - vPTS;
packet.dts = packet.dts - vDTS;
#endif
h264_file_write_frame( pFormatCtx_Record, packet.stream_index, packet.data, packet.size, packet.dts, packet.pts);
}
else
{
NSLog(#"pFormatCtx_Record no exist");
}
}
}
break;
case eH264RecClose:
{
if ( pFormatCtx_Record )
{
h264_file_close(pFormatCtx_Record);
#if 0
// 20130607 Test
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void)
{
ALAssetsLibrary *library = [[ALAssetsLibrary alloc]init];
NSString *filePathString = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/test.mp4"];
NSURL *filePathURL = [NSURL fileURLWithPath:filePathString isDirectory:NO];
if(1)// ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:filePathURL])
{
[library writeVideoAtPathToSavedPhotosAlbum:filePathURL completionBlock:^(NSURL *assetURL, NSError *error){
if (error) {
// TODO: error handling
NSLog(#"writeVideoAtPathToSavedPhotosAlbum error");
} else {
// TODO: success handling
NSLog(#"writeVideoAtPathToSavedPhotosAlbum success");
}
}];
}
[library release];
});
#endif
vPTS = 0;
vDTS = 0;
vAudioPTS = 0;
vAudioDTS = 0;
pFormatCtx_Record = NULL;
NSLog(#"h264_file_close() is finished");
}
else
{
NSLog(#"fc no exist");
}
bFirstIFrame = false;
veVideoRecordState = eH264RecIdle;
}
break;
default:
if ( pFormatCtx_Record )
{
h264_file_close(pFormatCtx_Record);
pFormatCtx_Record = NULL;
}
NSLog(#"[ERROR] unexpected veVideoRecordState!!");
veVideoRecordState = eH264RecIdle;
break;
}
}
// Decode video frame
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
}
else if(packet.stream_index==audioStream)
{
// 20131024 albert.liao modfied start
static int vPktCount=0;
BOOL bIsAACADTS = FALSE;
int ret = 0;
if(aPlayer.vAACType == eAAC_UNDEFINED)
{
tAACADTSHeaderInfo vxAACADTSHeaderInfo = {0};
bIsAACADTS = [AudioUtilities parseAACADTSHeader:(uint8_t *)packet.data ToHeader:&vxAACADTSHeaderInfo];
}
@synchronized(aPlayer)
{
if(aPlayer==nil)
{
aPlayer = [[AudioPlayer alloc]initAudio:nil withCodecCtx:(AVCodecContext *) pAudioCodecCtx];
NSLog(#"aPlayer initAudio");
if(bIsAACADTS)
{
aPlayer.vAACType = eAAC_ADTS;
//NSLog(#"is ADTS AAC");
}
}
else
{
if(vPktCount<5) // The voice is listened once image is rendered
{
vPktCount++;
}
else
{
if([aPlayer getStatus]!=eAudioRunning)
{
dispatch_async(dispatch_get_main_queue(), ^(void) {
@synchronized(aPlayer)
{
NSLog(#"aPlayer start play");
[aPlayer Play];
}
});
}
}
}
};
#synchronized(aPlayer)
{
int ret = 0;
ret = [aPlayer putAVPacket:&packet];
if(ret <= 0)
NSLog(#"Put Audio Packet Error!!");
}
// 20131024 albert.liao modfied end
if(bFirstIFrame==true)
{
switch(veVideoRecordState)
{
case eH264RecActive:
{
if ( pFormatCtx_Record )
{
h264_file_write_audio_frame(pFormatCtx_Record, pAudioCodecCtx, packet.stream_index, packet.data, packet.size, packet.dts, packet.pts);
}
else
{
NSLog(#"pFormatCtx_Record no exist");
}
}
}
}
}
else
{
//fprintf(stderr, "packet len=%d, Byte=%02X%02X%02X%02X%02X\n",\
packet.size, packet.data[0],packet.data[1],packet.data[2],packet.data[3], packet.data[4]);
}
// 20130525 albert.liao modified end
}
return frameFinished!=0;
}
avformat_write_header:
int h264_file_create(const char *pFilePath, AVFormatContext *fc, AVCodecContext *pCodecCtx,AVCodecContext *pAudioCodecCtx, double fps, void *p, int len )
{
int vRet=0;
AVOutputFormat *of=NULL;
AVStream *pst=NULL;
AVCodecContext *pcc=NULL, *pAudioOutputCodecContext=NULL;
avcodec_register_all();
av_register_all();
av_log_set_level(AV_LOG_VERBOSE);
if(!pFilePath)
{
fprintf(stderr, "FilePath no exist");
return -1;
}
if(!fc)
{
fprintf(stderr, "AVFormatContext no exist");
return -1;
}
fprintf(stderr, "file=%s\n",pFilePath);
// Create container
of = av_guess_format( 0, pFilePath, 0 );
fc->oformat = of;
strcpy( fc->filename, pFilePath );
// Add video stream
pst = avformat_new_stream( fc, 0 );
vVideoStreamIdx = pst->index;
fprintf(stderr,"Video Stream:%d",vVideoStreamIdx);
pcc = pst->codec;
avcodec_get_context_defaults3( pcc, AVMEDIA_TYPE_VIDEO );
// Save the stream as origin setting without convert
pcc->codec_type = pCodecCtx->codec_type;
pcc->codec_id = pCodecCtx->codec_id;
pcc->bit_rate = pCodecCtx->bit_rate;
pcc->width = pCodecCtx->width;
pcc->height = pCodecCtx->height;
if(fps==0)
{
double fps=0.0;
AVRational pTimeBase;
pTimeBase.num = pCodecCtx->time_base.num;
pTimeBase.den = pCodecCtx->time_base.den;
fps = 1.0/ av_q2d(pCodecCtx->time_base)/ FFMAX(pCodecCtx->ticks_per_frame, 1);
fprintf(stderr,"fps_method(tbc): 1/av_q2d()=%g",fps);
pcc->time_base.num = 1;
pcc->time_base.den = fps;
}
else
{
pcc->time_base.num = 1;
pcc->time_base.den = fps;
}
// reference ffmpeg\libavformat\utils.c
// For SPS and PPS in avcC container
pcc->extradata = malloc(sizeof(uint8_t)*pCodecCtx->extradata_size);
memcpy(pcc->extradata, pCodecCtx->extradata, pCodecCtx->extradata_size);
pcc->extradata_size = pCodecCtx->extradata_size;
// For Audio stream
if(pAudioCodecCtx)
{
AVCodec *pAudioCodec=NULL;
AVStream *pst2=NULL;
pAudioCodec = avcodec_find_encoder(AV_CODEC_ID_AAC);
// Add audio stream
pst2 = avformat_new_stream( fc, pAudioCodec );
vAudioStreamIdx = pst2->index;
pAudioOutputCodecContext = pst2->codec;
avcodec_get_context_defaults3( pAudioOutputCodecContext, pAudioCodec );
fprintf(stderr,"Audio Stream:%d",vAudioStreamIdx);
fprintf(stderr,"pAudioCodecCtx->bits_per_coded_sample=%d",pAudioCodecCtx->bits_per_coded_sample);
pAudioOutputCodecContext->codec_type = AVMEDIA_TYPE_AUDIO;
pAudioOutputCodecContext->codec_id = AV_CODEC_ID_AAC;
// Copy the codec attributes
pAudioOutputCodecContext->channels = pAudioCodecCtx->channels;
pAudioOutputCodecContext->channel_layout = pAudioCodecCtx->channel_layout;
pAudioOutputCodecContext->sample_rate = pAudioCodecCtx->sample_rate;
pAudioOutputCodecContext->bit_rate = 12000;//pAudioCodecCtx->sample_rate * pAudioCodecCtx->bits_per_coded_sample;
pAudioOutputCodecContext->bits_per_coded_sample = pAudioCodecCtx->bits_per_coded_sample;
pAudioOutputCodecContext->profile = pAudioCodecCtx->profile;
//FF_PROFILE_AAC_LOW;
// pAudioCodecCtx->bit_rate;
// AV_SAMPLE_FMT_U8P, AV_SAMPLE_FMT_S16P
//pAudioOutputCodecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;//pAudioCodecCtx->sample_fmt;
pAudioOutputCodecContext->sample_fmt = pAudioCodecCtx->sample_fmt;
//pAudioOutputCodecContext->sample_fmt = AV_SAMPLE_FMT_U8;
pAudioOutputCodecContext->sample_aspect_ratio = pAudioCodecCtx->sample_aspect_ratio;
pAudioOutputCodecContext->time_base.num = pAudioCodecCtx->time_base.num;
pAudioOutputCodecContext->time_base.den = pAudioCodecCtx->time_base.den;
pAudioOutputCodecContext->ticks_per_frame = pAudioCodecCtx->ticks_per_frame;
pAudioOutputCodecContext->frame_size = 1024;
fprintf(stderr,"profile:%d, sample_rate:%d, channles:%d", pAudioOutputCodecContext->profile, pAudioOutputCodecContext->sample_rate, pAudioOutputCodecContext->channels);
AVDictionary *opts = NULL;
av_dict_set(&opts, "strict", "experimental", 0);
if (avcodec_open2(pAudioOutputCodecContext, pAudioCodec, &opts) < 0) {
fprintf(stderr, "\ncould not open codec\n");
}
av_dict_free(&opts);
#if 0
// For Audio, this part is no need
if(pAudioCodecCtx->extradata_size!=0)
{
NSLog(#"extradata_size !=0");
pAudioOutputCodecContext->extradata = malloc(sizeof(uint8_t)*pAudioCodecCtx->extradata_size);
memcpy(pAudioOutputCodecContext->extradata, pAudioCodecCtx->extradata, pAudioCodecCtx->extradata_size);
pAudioOutputCodecContext->extradata_size = pAudioCodecCtx->extradata_size;
}
else
{
// For WMA test only
pAudioOutputCodecContext->extradata_size = 0;
NSLog(#"extradata_size ==0");
}
#endif
}
if(fc->oformat->flags & AVFMT_GLOBALHEADER)
{
pcc->flags |= CODEC_FLAG_GLOBAL_HEADER;
pAudioOutputCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
if ( !( fc->oformat->flags & AVFMT_NOFILE ) )
{
vRet = avio_open( &fc->pb, fc->filename, AVIO_FLAG_WRITE );
if(vRet!=0)
{
fprintf(stderr,"avio_open(%s) error", fc->filename);
}
}
// dump format in console
av_dump_format(fc, 0, pFilePath, 1);
vRet = avformat_write_header( fc, NULL );
if(vRet==0)
return 1;
else
return 0;
}
av_interleaved_write_frame:
void h264_file_write_frame(AVFormatContext *fc, int vStreamIdx, const void* p, int len, int64_t dts, int64_t pts )
{
AVStream *pst = NULL;
AVPacket pkt;
if ( 0 > vVideoStreamIdx )
return;
// may be audio or video
pst = fc->streams[ vStreamIdx ];
// Init packet
av_init_packet( &pkt );
if(vStreamIdx ==vVideoStreamIdx)
{
pkt.flags |= ( 0 >= getVopType( p, len ) ) ? AV_PKT_FLAG_KEY : 0;
//pkt.flags |= AV_PKT_FLAG_KEY;
pkt.stream_index = pst->index;
pkt.data = (uint8_t*)p;
pkt.size = len;
pkt.dts = AV_NOPTS_VALUE;
pkt.pts = AV_NOPTS_VALUE;
// TODO: mark or unmark the log
//fprintf(stderr, "dts=%lld, pts=%lld\n",dts,pts);
// av_write_frame( fc, &pkt );
}
av_interleaved_write_frame( fc, &pkt );
}
av_write_trailer:
void h264_file_close(AVFormatContext *fc)
{
if ( !fc )
return;
av_write_trailer( fc );
if ( fc->oformat && !( fc->oformat->flags & AVFMT_NOFILE ) && fc->pb )
avio_close( fc->pb );
av_free( fc );
}
Thanks.
It looks like you're using the same AVFormatContext for both the input and output?
In the line
pst = fc->streams[ vStreamIdx ];
You're assigning the AVStream* from the AVFormatContext connected to your input (the RTSP stream). But later on you're trying to write the packet back to the same context with av_interleaved_write_frame( fc, &pkt );. I think of a context as a file, which has helped me navigate this type of thing better. I do something identical to what you're doing (not on iOS though), where I use a separate AVFormatContext for each of the input (RTSP stream) and the output (mp4 file). If I'm correct, I think what you need to do is initialize a separate output AVFormatContext and write your packets to that.
The following code (without error checking everything) is what I do to take an AVFormatContext * output_format_context = NULL and the AVFormatContext * input_format_context that I had associated with the RTSP stream, and write from one to the other. This is after I have fetched a packet, etc., which in your case it looks like you're populating (I just take the packet from av_read_frame and re-package it).
This is code that could be in your write-frame function (but it also includes the writing of the header).
AVFormatContext * output_format_context;
AVOutputFormat * output_format;
AVStream * in_stream_2;
AVStream * out_stream_2;
int i;
// (out_filename, input_format_context and packet come from the surrounding code)
// Allocate the context with the output file
avformat_alloc_output_context2(&output_format_context, NULL, NULL, out_filename.c_str());
// Point to AVOutputFormat * output_format for manipulation
output_format = output_format_context->oformat;
// Loop through all streams
for (i = 0; i < input_format_context->nb_streams; i++) {
    // Create a pointer to the input stream that was allocated earlier in the code
    AVStream *in_stream = input_format_context->streams[i];
    // Create a pointer to a new stream that will be part of the output
    AVStream *out_stream = avformat_new_stream(output_format_context, in_stream->codec->codec);
    // Set time_base of the new output stream to equal the input stream's, since I'm not
    // changing anything (can be avoided, but you get a deprecation warning)
    out_stream->time_base = in_stream->time_base;
    // This is the non-deprecated way of copying all the parameters from the input stream
    // into the output stream, since everything stays the same
    avcodec_parameters_from_context(out_stream->codecpar, in_stream->codec);
    // I don't remember what this is for :)
    out_stream->codec->codec_tag = 0;
    // This just propagates the global-header flag from the format context to the stream
    if (output_format_context->oformat->flags & AVFMT_GLOBALHEADER)
        out_stream->codec->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
}
// Check the NOFILE flag and open the output file context (previously the output file was
// associated with the format context; now it is actually opened)
if (!(output_format->flags & AVFMT_NOFILE))
    avio_open(&output_format_context->pb, out_filename.c_str(), AVIO_FLAG_WRITE);
// Write the header (not sure if this is always needed, but for h264 I believe it is)
avformat_write_header(output_format_context, NULL);
// Re-get the appropriate stream that was populated above (this should allow for both audio/video)
in_stream_2 = input_format_context->streams[packet.stream_index];
out_stream_2 = output_format_context->streams[packet.stream_index];
// Rescale pts and dts, duration and pos - adapt as you need in your code
packet.pts = av_rescale_q_rnd(packet.pts, in_stream_2->time_base, out_stream_2->time_base, (AVRounding) (AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
packet.dts = av_rescale_q_rnd(packet.dts, in_stream_2->time_base, out_stream_2->time_base, (AVRounding) (AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
packet.duration = av_rescale_q(packet.duration, in_stream_2->time_base, out_stream_2->time_base);
packet.pos = -1;
// The first packet of my stream always gives me negative dts/pts, so this just protects
// that first one for my purposes; you probably don't need it
if (packet.dts < 0) packet.dts = 0;
if (packet.pts < 0) packet.pts = 0;
// Finally write the frame
av_interleaved_write_frame(output_format_context, &packet);
// ....
// Write the trailer, close/cleanup... etc
// ....
This code is fairly bare-bones and doesn't include the setup (which it sounds like you're doing correctly anyway). I would also imagine this code could be cleaned up and tweaked for your purposes, but it works for me to re-write the RTSP stream into a file (in my case many files, but code not shown).
The code is C code, so you might need to make minor tweaks for Swift compatibility (for some of the library function calls, maybe). I think overall it should be compatible though.
Hopefully this helps point you in the right direction. This was cobbled together thanks to several sample-code sources (I don't remember where), along with warning prompts from the libraries themselves.

Fastcgi wrapper not working with root permissions

I have a problem setting up the FastCGI wrapper to execute commands from the web as root.
The only settings I have changed:
FCGI_USER="root"
FCGI_GROUP="root"
Starting the wrapper then fails:
[....] Starting FastCGI wrapper: fcgiwrap
spawn-fcgi: I will not set uid to 0
failed!
I want to manipulate GPIO through the web with wiringPi, but wiringPiSetup requires root access.
Any ideas?
spawn-fcgi (https://github.com/lighttpd/spawn-fcgi) is the problem.
Apply this patch to src/spawn-fcgi.c and recompile spawn-fcgi. In my case I wanted to run nmap -sS, which requires root. It really grinds my gears when developers decide to make running as root impossible, because it's just a waste of everyone's time. Always add a flag, call it --super-insecure if you want to, but at least put it there!
diff --git a/src/spawn-fcgi.c b/src/spawn-fcgi.c
index 2f320f7..27176d5 100644
--- a/src/spawn-fcgi.c
+++ b/src/spawn-fcgi.c
@@ -385,10 +385,10 @@ static int find_user_group(const char *user, const char *group, uid_t *uid, gid_
}
my_uid = my_pwd->pw_uid;
- if (my_uid == 0) {
+ /*if (my_uid == 0) {
fprintf(stderr, "spawn-fcgi: I will not set uid to 0\n");
return -1;
- }
+ }*/
if (username) *username = user;
} else {
@@ -407,18 +407,18 @@ static int find_user_group(const char *user, const char *group, uid_t *uid, gid_
}
my_gid = my_grp->gr_gid;
- if (my_gid == 0) {
+ /*if (my_gid == 0) {
fprintf(stderr, "spawn-fcgi: I will not set gid to 0\n");
return -1;
- }
+ }*/
}
} else if (my_pwd) {
my_gid = my_pwd->pw_gid;
- if (my_gid == 0) {
+ /*if (my_gid == 0) {
fprintf(stderr, "spawn-fcgi: I will not set gid to 0\n");
return -1;
- }
+ }*/
}
*uid = my_uid;

Why can't I get input from the ifstream?

I am trying to read in a text file for a maze program. The input is something like:
10 10
OO+E+OO+++
O++O+O+OOO
OOOOOO+O+O
+++++O++OO
OOO+OOO+O+
O+O+O+++O+
O+O+OOO+OO
++O+++O++O
O+OOOOO++O
O+O++O+OOO
When the user clicks on the Open button, this opens an open-file dialog box:
{
    openFileDialog1->InitialDirectory = "C:\Desktop;";
    openFileDialog1->Filter = "Maze files (*.DAT)|*.DAT";
    if (openFileDialog1->ShowDialog() == ::DialogResult::OK)
    {
        char filename[1024];
        for (int i = 0; i < openFileDialog1->FileName->Length; i++)
        {
            filename[i] = openFileDialog1->FileName[i];
        }
        ifstream ifs;
        ifs.open(filename); // NULL terminate this
        maze = new Maze( panel1, ifs);
        ifs.close();
    }
}
The following is the Maze constructor:
Maze::Maze( Panel ^ drawingPanel, ifstream & ifs )
{
    try
    {
        valid = false;
        ifs >> width >> height;
        int temp = width;
        drawingPanel->Size.Width = width;
        drawingPanel->Size.Height = height;
        for (int i = 0; i < height; i++) // height is always nothing
            for (int j = 0; j < width; j++)
            {
                if (orig[j][i] == DEADEND ||
                    orig[j][i] == OPEN ||
                    orig[j][i] == EXIT )
                    ifs >> orig[j][i]; // NULLS????
                else
                    throw 'D'; // i had to throw something....so i threw the D /* make a slit class and throw the D there? slit.fill(D); */
            }
        // this should be last
        panel = drawingPanel;
        valid = true;
    }
    catch (...)
    {
        valid = false;
        MessageBox::Show( "Not a proper maze file!" );
    }
}
When the program runs, ifs >> width >> height does not set width and height correctly.
I have searched this site for this problem and have not been able to find anything that helped. Sorry for my inexperience; any help is greatly appreciated.
Your program is very ugly: I don't know if you're programming in C, C++ or C++/CLI, or trying to mix the three...
Because you use a Windows Forms project, I will give you a .NET solution for reading the file. It's not the best solution, but it does not mix things.
First, to read the file, in the first form:
private: System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
{
    openFileDialog1->Filter = "Maze Files (*.dat) | *.dat";
    if (openFileDialog1->ShowDialog() == ::DialogResult::OK)
    {
        String ^fileName = openFileDialog1->FileName;
        IO::StreamReader ^myMazeFile = gcnew IO::StreamReader(fileName);
        String ^content = myMazeFile->ReadToEnd();
        richTextBox1->Text = content;
        myMazeFile->Close();
        // display button for opening the second form which draws the maze
        button2->Visible = true;
    }
}
Now that we have our file content, we pass it to a second form, which will draw the maze:
private: System::Void button2_Click(System::Object^ sender, System::EventArgs^ e)
{
    String ^content = richTextBox1->Text;
    Maze ^frm = gcnew Maze(content);
    frm->Show();
}
In the second form, create an overloaded constructor:
Maze(String ^contentMap)
{
    InitializeComponent();
    String ^dimension = getWords(contentMap, 2);
    array<String ^> ^coordsString = dimension->Split(gcnew array<Char> {' '});
    m_width = Convert::ToInt32(coordsString[0]);
    m_height = Convert::ToInt32(coordsString[1]);
    panel1->Width = m_width;
    panel1->Height = m_height;
}
The getWords method:
String ^getWords(String ^input, int numWords)
{
    try
    {
        int words = numWords;
        for (int i = 0; i < input->Length; ++i)
        {
            if (input[i] == ' ' || input[i] == '\n')
                words--;
            if (words == 0)
            {
                return input->Substring(0, i);
            }
        }
    }
    catch (Exception ^ex)
    {
        // ...
    }
    return String::Empty;
}
You now have your dimensions in pure .NET (private members m_width and m_height); getWords(contentMap, 2) returns the substring up to the second separator, which here is the "10 10" dimension line.