G-WAN - How to return Status Code: 200 OK if request URL = 541+ characters?

I'm using G-WAN Web App Server v7.12.6.
How can I return a valid Status Code: 200 OK response when the request URL totals 541+ characters, including its 25 parameters?
ajaxGet(URL, method) where method is GET or PUT (same result)
Request URL:
http://myWebsite.ca:xx/?createCompany.c&legalname=mycomp&dba=mycomp%20dba&www=www.myWebsite...
createCompany.c
#pragma link "pq"
#include <stdlib.h>
#include <string.h>
#include "/usr/include/postgresql/libpq-fe.h"
#include "gwan.h"
//----------------------------------------------------------------------------
int main(int argc, char *argv[])
{
   u64 start = getus();
   PGconn *conn;
   PGresult *res;
   char DBrequestString[1000] = "";
   char *legal_name = 0, *dba = 0, *www = 0; // ...+ 22 more

   xbuf_t *reply = get_reply(argv);
   get_arg("legalname=", &legal_name, argc, argv);
   get_arg("dba=", &dba, argc, argv);
   get_arg("www=", &www, argc, argv);
   // ...+ 22 more

   char requestString[1000] = "SELECT create_company('%s','%s','%s', ... + 22 more);";
   sprintf(DBrequestString, requestString, legal_name, dba, www, ... + 22 more);

   conn = PQconnectdb("host=x port=x dbname=x user=x password=x");
   if (PQstatus(conn) != CONNECTION_OK)
   {
      fprintf(stderr, "Connection to database failed: %s", PQerrorMessage(conn));
      PQfinish(conn);
      xbuf_cat(reply, "{\"message\":\"Connection to database failed !\"}"); // JSON-formatted message
      return 200;
   }
   res = PQexec(conn, DBrequestString);
   printf(" --> %s\n", PQgetvalue(res, 0, 0));
   xbuf_cat(reply, PQgetvalue(res, 0, 0)); // return one line in JSON format
   PQclear(res);
   PQfinish(conn);
   printf("EXECUTION TIME: %.2fms\n\n", (getus() - start) / 1000.0);
   return 200; // return OK
}

Good question - I don't have access to the source code right now, but looking at the G-WAN API on the website made me think that either READ_XBUF or MAX_ENTITY_SIZE might help.
Typically, READ_XBUF would be used in a G-WAN handler to enlarge (if needed) the connection buffer, while MAX_ENTITY_SIZE is a one-time setting that can be changed at any time (even before the server starts, thanks to the init.c script).
I think that just enlarging the MAX_ENTITY_SIZE value (whose purpose is to prevent large-entity DoS attacks) would do the job, because it is most likely that G-WAN automatically enlarges the READ_XBUF on a per-request basis when reading from the client.

Thank you for your quick response.
I updated the script createCompany.c to raise the limit (as per your example):
u32 *old_entity_size = (u32*)get_env(argv, MAX_ENTITY_SIZE);
u32 new_entity_size = 2 * 1024 * 1024; // 2 MiB
*old_entity_size = new_entity_size; // raise the limit to 2 MiB
Is it possible that MAX_ENTITY_SIZE applies exclusively to POST requests?
Will raising the MAX_ENTITY_SIZE limit also work for GET/PUT requests?
Actually, I would like the limit to remain as initially set up by default by G-WAN (useful for the other .c scripts), but to raise it for GET/PUT requests only in this particular script, createCompany.c.
Is there any script.c example of how to raise the limit via READ_XBUF?
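For what it's worth, one way to keep the server-wide default intact is to save the limit, raise it, and restore it before the script returns. A minimal sketch, assuming (as in your snippet above) that get_env() hands back MAX_ENTITY_SIZE as a writable u32*; note the setting is global, so concurrent requests served while this script runs would also see the raised value:
// createCompany.c (sketch): raise MAX_ENTITY_SIZE for this request only,
// restoring the server default before returning.
int main(int argc, char *argv[])
{
   u32 *entity_size = (u32*)get_env(argv, MAX_ENTITY_SIZE);
   const u32 saved_size = *entity_size;   // remember the server default
   *entity_size = 2 * 1024 * 1024;        // 2 MiB, for this script's work

   // ... get_arg() calls, PostgreSQL query, xbuf_cat() reply ...

   *entity_size = saved_size;             // restore the default
   return 200;
}
As for a READ_XBUF example: that would belong in a connection handler rather than in a servlet; I would expect get_env(argv, READ_XBUF) to return the connection's xbuf_t* there, but I haven't verified that against gwan.h, so treat it as a pointer to the docs rather than working code.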

How to put BG96 on power save mode between sending messages to Azure IoT Hub over HTTP

I'm using a Nucleo L496ZG, an X-NUCLEO-IKS01A2, and the Quectel BG96 module to send sensor data (temperature, humidity, etc.) to Azure IoT Central over HTTP.
I've been using the example implementation provided by Avnet here, which works fine, but it's not power-optimized: with a 6700 mAh battery pack it only lasts around 30 hours sending telemetry every ~10 seconds. The goal is for it to last around a week. I'm open to increasing the time between messages, but I also want to save power between sends.
I've gone over the Quectel BG96 manuals and I've tried two things:
1) powering off the device by driving the PWRKEY and turning it back on when I need to send a message
I've gotten this to work, kinda… until I get a hard fault exception, which happens seemingly randomly anywhere from ~5 minutes of running to 2 hours (messages send successfully prior to the exception). The output of the crash log parser is the same every time:
Crash location = strncmp [0x08038DF8] (based on PC value)
Caller location = _findenv_r [0x0804119D] (based on LR value)
Stack Pointer at the time of crash = [20008128]
Target and Fault Info:
Processor Arch: ARM-V7M or above
Processor Variant: C24
Forced exception, a fault with configurable priority has been escalated to HardFault
A precise data access error has occurred. Faulting address: 03060B30
The caller location traces back to my .map file and I don't know what to make of it.
My code:
// Copyright (c) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
//#define USE_MQTT
#include <stdlib.h>
#include "mbed.h"
#include "iothubtransporthttp.h"
#include "iothub_client_core_common.h"
#include "iothub_client_ll.h"
#include "azure_c_shared_utility/platform.h"
#include "azure_c_shared_utility/agenttime.h"
#include "jsondecoder.h"
#include "bg96gps.hpp"
#include "azure_message_helper.h"
#define IOT_AGENT_OK CODEFIRST_OK
#include "azure_certs.h"
/* initialize the expansion board && sensors */
#include "XNucleoIKS01A2.h"
static HTS221Sensor *hum_temp;
static LSM6DSLSensor *acc_gyro;
static LPS22HBSensor *pressure;
static const char* connectionString = "xxx";
// to report F uncomment this #define CTOF(x) (((double)(x)*9/5)+32)
#define CTOF(x) (x)
Thread azure_client_thread(osPriorityNormal, 10*1024, NULL, "azure_client_thread");
static void azure_task(void);
EventFlags deleteOK;
size_t g_message_count_send_confirmations;
/* create the GPS elements for example program */
BG96Interface* bg96Interface;
//static int tilt_event;
// void mems_int1(void)
// {
// tilt_event++;
// }
void mems_init(void)
{
    //acc_gyro->attach_int1_irq(&mems_int1);  // Attach callback to LSM6DSL INT1
    hum_temp->enable();                       // Enable HTS221 environmental sensor
    pressure->enable();                       // Enable barometric pressure sensor
    acc_gyro->enable_x();                     // Enable LSM6DSL accelerometer
    //acc_gyro->enable_tilt_detection();      // Enable Tilt Detection
}
void powerUp(void) {
    if (platform_init() != 0) {
        printf("Error initializing the platform\r\n");
        return;
    }
    bg96Interface = (BG96Interface*) easy_get_netif(true);
}
void BG96_Modem_PowerOFF(void)
{
    DigitalOut BG96_RESET(D7);
    DigitalOut BG96_PWRKEY(D10);
    DigitalOut BG97_WAKE(D11);
    BG96_RESET = 0;
    BG96_PWRKEY = 0;
    BG97_WAKE = 0;
    wait_ms(300);
}
void powerDown(){
    platform_deinit();
    BG96_Modem_PowerOFF();
}
//
// The main routine simply prints a banner, initializes the system,
// starts the worker threads and waits for a termination (join)
int main(void)
{
    //printStartMessage();
    XNucleoIKS01A2 *mems_expansion_board = XNucleoIKS01A2::instance(I2C_SDA, I2C_SCL, D4, D5);
    hum_temp = mems_expansion_board->ht_sensor;
    acc_gyro = mems_expansion_board->acc_gyro;
    pressure = mems_expansion_board->pt_sensor;
    azure_client_thread.start(azure_task);
    azure_client_thread.join();
    platform_deinit();
    printf(" - - - - - - - ALL DONE - - - - - - - \n");
    return 0;
}
static void send_confirm_callback(IOTHUB_CLIENT_CONFIRMATION_RESULT result, void* userContextCallback)
{
    //userContextCallback;
    // When a message is sent this callback will get invoked
    g_message_count_send_confirmations++;
    deleteOK.set(0x1);
}
void sendMessage(IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle, char* buffer, size_t size)
{
    IOTHUB_MESSAGE_HANDLE messageHandle = IoTHubMessage_CreateFromByteArray((const unsigned char*)buffer, size);
    if (messageHandle == NULL) {
        printf("unable to create a new IoTHubMessage\r\n");
        return;
    }
    if (IoTHubClient_LL_SendEventAsync(iotHubClientHandle, messageHandle, send_confirm_callback, NULL) != IOTHUB_CLIENT_OK)
        printf("FAILED to send! [RSSI=%d]\n", platform_RSSI());
    else
        printf("OK. [RSSI=%d]\n", platform_RSSI());
    IoTHubMessage_Destroy(messageHandle);
}
void azure_task(void)
{
    //bool tilt_detection_enabled=true;
    float gtemp, ghumid, gpress;
    int k;
    int msg_sent = 1;
    while (true) {
        powerUp();
        mems_init();
        /* Setup IoTHub client configuration */
        IOTHUB_CLIENT_LL_HANDLE iotHubClientHandle = IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
        if (iotHubClientHandle == NULL) {
            printf("Failed on IoTHubClient_Create\r\n");
            return;
        }
        // add the certificate information
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "TrustedCerts", certificates) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"TrustedCerts\"\r\n");
#if MBED_CONF_APP_TELUSKIT == 1
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "product_info", "TELUSIOTKIT") != IOTHUB_CLIENT_OK)
            printf("failure to set option \"product_info\"\r\n");
#endif
        // polls will happen effectively at ~10 seconds. The default value of minimumPollingTime is 25 minutes.
        // For more information, see:
        // https://azure.microsoft.com/documentation/articles/iot-hub-devguide/#messaging
        unsigned int minimumPollingTime = 9;
        if (IoTHubClient_LL_SetOption(iotHubClientHandle, "MinimumPollingTime", &minimumPollingTime) != IOTHUB_CLIENT_OK)
            printf("failure to set option \"MinimumPollingTime\"\r\n");
        IoTDevice* iotDev = (IoTDevice*)malloc(sizeof(IoTDevice));
        if (iotDev == NULL) {
            return;
        }
        setUpIotStruct(iotDev);
        char* msg;
        size_t msgSize;
        hum_temp->get_temperature(&gtemp);   // get temperature
        hum_temp->get_humidity(&ghumid);     // get humidity
        pressure->get_pressure(&gpress);     // get pressure
        iotDev->Temperature = CTOF(gtemp);
        iotDev->Humidity = (int)ghumid;
        iotDev->Pressure = (int)gpress;
        printf("(%04d)", msg_sent++);
        msg = makeMessage(iotDev);
        msgSize = strlen(msg);
        sendMessage(iotHubClientHandle, msg, msgSize);
        free(msg);
        iotDev->Tilt &= 0x2;
        /* schedule IoTHubClient to send events/receive commands */
        IOTHUB_CLIENT_STATUS status;
        while ((IoTHubClient_LL_GetSendStatus(iotHubClientHandle, &status) == IOTHUB_CLIENT_OK) && (status == IOTHUB_CLIENT_SEND_STATUS_BUSY))
        {
            IoTHubClient_LL_DoWork(iotHubClientHandle);
            ThisThread::sleep_for(100);
        }
        deleteOK.wait_all(0x1);
        free(iotDev);
        IoTHubClient_LL_Destroy(iotHubClientHandle);
        powerDown();
        ThisThread::sleep_for(300000);
    }
    return;
}
I know PSM is probably the way to go, since powering the device on/off draws a lot of power, but it would be useful if someone had an idea of what is happening here.
2) putting the device to PSM between sending messages
The BG96 library in the example code I'm using doesn't have a method to turn on PSM, so I tried to implement my own. When I try to run it, it basically runs into an exception right away, so I know it's wrong (I'm very new to embedded development and have no prior experience with AT commands).
/** ----------------------------------------------------------
 * this is a method provided by the current library
 * @brief  Tx a string to the BG96 and wait for an OK response
 * @param  none
 * @retval true if OK received, false otherwise
 */
bool BG96::tx2bg96(char* cmd) {
    bool ok = false;
    _bg96_mutex.lock();
    ok = _parser.send(cmd) && _parser.recv("OK");
    _bg96_mutex.unlock();
    return ok;
}
/**
 * method I created in an attempt to use PSM
 */
bool BG96::psm(void) {
    return tx2bg96((char*)"AT+CPSMS=1,,,\"00000100\",\"00000001\"");
}
Can someone tell me what I'm doing wrong and provide any guidance on how I can achieve my goal of having my device run on battery for longer?
Thank you!!
I got Power Saving Mode working by using Mbed's ATCmdParser and the AT+QPSMS command as per Quectel's docs. Note that the modem doesn't always go into power saving mode right away. I also found that I have to restart the modem afterwards or else I get weird behaviour. My code looks something like this:
bool BG96::psm(char* T3412, char* T3324) {
    _bg96_mutex.lock();
    if (_parser.send("AT+QPSMS=1,,,\"%s\",\"%s\"", T3412, T3324) && _parser.recv("OK")) {
        _bg96_mutex.unlock();
    } else {
        _bg96_mutex.unlock();
        return false;
    }
    return BG96Ready();  // restarts the modem
}
To send a message to Azure, the modem needs to be manually woken up by driving the PWRKEY to start bi-directional communication, and a new client handle needs to be created and torn down each time, since the Azure connection uses keepAlive and the modem is unreachable while it's in PSM.
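To tie it together, one cycle of the question's telemetry loop would then look roughly like this. A sketch only: BG96_Modem_PowerON() is a hypothetical helper that pulses PWRKEY to wake the modem; everything else reuses names from the code above.
// One telemetry cycle with PSM (sketch): wake modem, rebuild client, send, sleep.
while (true) {
    BG96_Modem_PowerON();        // hypothetical: pulse PWRKEY to leave PSM
    powerUp();                   // platform_init() + network interface

    IOTHUB_CLIENT_LL_HANDLE h =
        IoTHubClient_LL_CreateFromConnectionString(connectionString, HTTP_Protocol);
    // ... set options, read sensors, sendMessage(), DoWork() loop as before ...
    IoTHubClient_LL_Destroy(h);  // tear down: keepAlive won't survive PSM

    platform_deinit();
    // the modem re-enters PSM on its own once T3324 (active time) expires
    ThisThread::sleep_for(300000);
}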

Webm (VP8 / Opus) file read and write back

I am trying to develop a WebRTC simulator in C/C++. For media handling, I plan to use libav. I am thinking of the steps below to realize media exchange between two WebRTC simulators. Say I have two simulators, A and B:
1. Read media at A from an input webm file using the av_read_frame API. I assume I will get the encoded media (audio/video) data here; am I correct?
2. Send the encoded media data to simulator B over a UDP socket.
3. Simulator B receives the media data on its UDP socket as RTP packets.
4. Simulator B extracts the audio/video data from the just-received RTP packets.
5. I assume the extracted media data at simulator B is still encoded (am I correct?). I do not want to decode it; I want to write it to a file. Later I will play the file to check whether I did everything right.
To simplify the problem, let's take out the UDP socket part. Then my question reduces to: read data from a webm input file, get the encoded media, prepare the packet, and write it to an output file using av_interleaved_write_frame or any other appropriate API. All of this I want to do using libav.
Is there any example code I can refer to, or can somebody please guide me in developing it?
I am trying with a test program. As a first step, my aim is to read from a file and write to an output file. I have the code below, but it is not working properly.
//#define _AUDIO_WRITE_ENABLED_
#include "libavutil/imgutils.h"
#include "libavutil/samplefmt.h"
#include "libavformat/avformat.h"
static AVPacket pkt;
static AVFormatContext *fmt_ctx = NULL;
static AVFormatContext *av_format_context = NULL;
static AVOutputFormat *av_output_format = NULL;
static AVCodec *video_codec = NULL;
static AVStream *video_stream = NULL;
static AVCodec *audio_codec = NULL;
static AVStream *audio_stream = NULL;
static const char *src_filename = NULL;
static const char *dst_filename = NULL;
int main (int argc, char **argv)
{
    int ret = 0;
    int index = 0;
    if (argc != 3)
    {
        printf("Usage: ./webm input_video_file output_video_file \n");
        exit(0);
    }
    src_filename = argv[1];
    dst_filename = argv[2];
    printf("Source file = %s , Destination file = %s\n", src_filename, dst_filename);
    av_register_all();
    /* open input file, and allocate format context */
    if (avformat_open_input(&fmt_ctx, src_filename, NULL, NULL) < 0)
    {
        fprintf(stderr, "Could not open source file %s\n", src_filename);
        exit(1);
    }
    /* retrieve stream information */
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0)
    {
        fprintf(stderr, "Could not find stream information\n");
        exit(2);
    }
    av_output_format = av_guess_format(NULL, dst_filename, NULL);
    if (!av_output_format)
    {
        fprintf(stderr, "Could not guess output file format\n");
        exit(3);
    }
    av_output_format->audio_codec = AV_CODEC_ID_VORBIS;
    av_output_format->video_codec = AV_CODEC_ID_VP8;
    av_format_context = avformat_alloc_context();
    if (!av_format_context)
    {
        fprintf(stderr, "Could not allocate av format context\n");
        exit(4);
    }
    av_format_context->oformat = av_output_format;
    strcpy(av_format_context->filename, dst_filename);
    video_codec = avcodec_find_encoder(av_output_format->video_codec);
    if (!video_codec)
    {
        fprintf(stderr, "Codec not found\n");
        exit(5);
    }
    video_stream = avformat_new_stream(av_format_context, video_codec);
    if (!video_stream)
    {
        fprintf(stderr, "Could not alloc stream\n");
        exit(6);
    }
    avcodec_get_context_defaults3(video_stream->codec, video_codec);
    video_stream->codec->codec_id = AV_CODEC_ID_VP8;
    video_stream->codec->codec_type = AVMEDIA_TYPE_VIDEO;
    video_stream->time_base = (AVRational) {1, 30};
    video_stream->codec->width = 640;
    video_stream->codec->height = 480;
    video_stream->codec->pix_fmt = PIX_FMT_YUV420P;
    video_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
    video_stream->codec->bit_rate = 400000;
    video_stream->codec->gop_size = 10;
    video_stream->codec->max_b_frames = 1;
#ifdef _AUDIO_WRITE_ENABLED_
    audio_codec = avcodec_find_encoder(av_output_format->audio_codec);
    if (!audio_codec)
    {
        fprintf(stderr, "Codec not found audio codec\n");
        exit(5);
    }
    audio_stream = avformat_new_stream(av_format_context, audio_codec);
    if (!audio_stream)
    {
        fprintf(stderr, "Could not alloc stream for audio\n");
        exit(6);
    }
    avcodec_get_context_defaults3(audio_stream->codec, audio_codec);
    audio_stream->codec->codec_id = AV_CODEC_ID_VORBIS;
    audio_stream->codec->codec_type = AVMEDIA_TYPE_AUDIO;
    audio_stream->time_base = (AVRational) {1, 30};
    audio_stream->codec->sample_rate = 8000;
    audio_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
#endif
    if (!(av_output_format->flags & AVFMT_NOFILE))
    {
        if (avio_open(&av_format_context->pb, dst_filename, AVIO_FLAG_WRITE) < 0)
        {
            fprintf(stderr, "Could not open '%s'\n", dst_filename);
        }
    }
    /* Before avformat_write_header set the stream */
    avformat_write_header(av_format_context, NULL);
    /* initialize packet, set data to NULL, let the demuxer fill it */
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;
    pkt.stream_index = video_stream->index;
    ret = av_read_frame(fmt_ctx, &pkt);
    while (ret >= 0)
    {
        index++;
        //pkt.stream_index = video_avstream->index;
        if (pkt.stream_index == video_stream->index)
        {
            printf("Video: Read cycle %d, bytes read = %d, pkt stream index=%d\n", index, pkt.size, pkt.stream_index);
            av_write_frame(av_format_context, &pkt);
        }
#ifdef _AUDIO_WRITE_ENABLED_
        else if (pkt.stream_index == audio_stream->index)
        {
            printf("Audio: Read cycle %d, bytes read = %d, pkt stream index=%d\n", index, pkt.size, pkt.stream_index);
            av_write_frame(av_format_context, &pkt);
        }
#endif
        av_free_packet(&pkt);
        ret = av_read_frame(fmt_ctx, &pkt);
    }
    av_write_trailer(av_format_context);
    /** Exit procedure starts */
    avformat_close_input(&fmt_ctx);
    avformat_free_context(av_format_context);
    return 0;
}
When I execute this program, it outputs "Codec not found". Not sure what's going wrong; can somebody help, please?
The "Codec not found" issue is resolved by building libvpx 1.4 separately. I'm still struggling to read from the source file and write to a destination file.
EDIT 1: After code modification, I am able to write only the video to a file, though some errors are still present.
EDIT 2: With the modified code (2nd round), I see video frames are written properly. For audio frames I added code under the macro _AUDIO_WRITE_ENABLED_, but if I enable this macro the program crashes. Can somebody explain what's wrong in the audio write part (the code under the macro _AUDIO_WRITE_ENABLED_)?
I am not fully answering your question, but I hope we will get to the final solution eventually. When I tried to run your code, I got the error "time base not set".
The time base and other header specs are part of the codec context. This is how I specify them for writing into a file (vStream is an AVStream*):
#if LIBAVCODEC_VER_AT_LEAST(53, 21)
    avcodec_get_context_defaults3(rc->vStream->codec, AVMEDIA_TYPE_VIDEO);
#else
    avcodec_get_context_defaults2(rc->vStream->codec, AVMEDIA_TYPE_VIDEO);
#endif
#if LIBAVCODEC_VER_AT_LEAST(54, 25)
    vStream->codec->codec_id = AV_CODEC_ID_VP8;
#else
    vStream->codec->codec_id = CODEC_ID_VP8;
#endif
    vStream->codec->codec_type = AVMEDIA_TYPE_VIDEO;
    vStream->codec->time_base = (AVRational) {1, 30};
    vStream->codec->width = 640;
    vStream->codec->height = 480;
    vStream->codec->pix_fmt = PIX_FMT_YUV420P;
EDIT: I ran your program in Valgrind and it segfaults on av_write_frame. It looks like time_base and the other specs for the output are not set properly.
Add the specs before avformat_write_header(), before it is too late.
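For reference, on newer FFmpeg the read-encoded-packets-and-write-them task above is plain remuxing (stream copy), and no encoder lookup is needed at all, which also sidesteps the "Codec not found" problem. A condensed sketch along the lines of FFmpeg's doc/examples/remuxing.c, assuming FFmpeg 3.1+ (for avcodec_parameters_copy()); error handling is trimmed to bare return codes:
/* Minimal remux sketch: copy all streams from `in` to `out` without decoding. */
#include <libavformat/avformat.h>

int remux(const char *in, const char *out)
{
    AVFormatContext *ic = NULL, *oc = NULL;
    AVPacket pkt;
    if (avformat_open_input(&ic, in, NULL, NULL) < 0) return -1;
    if (avformat_find_stream_info(ic, NULL) < 0) return -1;
    if (avformat_alloc_output_context2(&oc, NULL, NULL, out) < 0) return -1;
    for (unsigned i = 0; i < ic->nb_streams; i++) {
        AVStream *os = avformat_new_stream(oc, NULL);
        if (!os || avcodec_parameters_copy(os->codecpar, ic->streams[i]->codecpar) < 0)
            return -1;
        os->codecpar->codec_tag = 0;  /* let the muxer pick the tag */
    }
    if (!(oc->oformat->flags & AVFMT_NOFILE) &&
        avio_open(&oc->pb, out, AVIO_FLAG_WRITE) < 0) return -1;
    if (avformat_write_header(oc, NULL) < 0) return -1;
    while (av_read_frame(ic, &pkt) >= 0) {
        /* rescale timestamps from the input to the output stream time base */
        av_packet_rescale_ts(&pkt, ic->streams[pkt.stream_index]->time_base,
                             oc->streams[pkt.stream_index]->time_base);
        pkt.pos = -1;
        if (av_interleaved_write_frame(oc, &pkt) < 0) break;
        av_packet_unref(&pkt);
    }
    av_write_trailer(oc);
    avformat_close_input(&ic);
    if (!(oc->oformat->flags & AVFMT_NOFILE)) avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}
The timestamp rescaling is the piece most often missed: each output stream may pick its own time base at avformat_write_header() time, so packets written with the input time base come out with wrong durations.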

transform javascript to opcode using spidermonkey

I am new to SpiderMonkey and want to use it to transform a JavaScript file into a sequence of bytecodes.
I got SpiderMonkey and built it in debug mode.
I want to use the JS_CompileScript function in jsapi.h to compile JavaScript code and analyze it to get the bytecode, but when I compile the code below and run it, I get a runtime error:
"Unhandled exception at 0x0f55c020 (mozjs185-1.0.dll) in spiderMonkeyTest.exe: 0xC0000005: Access violation reading location 0x00000d4c." I cannot resolve it.
Can anybody help me resolve this, or introduce another way to get bytecode from JavaScript code using SpiderMonkey?
// spiderMonkeyTest.cpp : Defines the entry point for the console application.
//
#define XP_WIN
#include <iostream>
#include <fstream>
#include "stdafx.h"
#include "jsapi.h"
#include "jsanalyze.h"
using namespace std;
using namespace js;
static JSClass global_class = { "global",
JSCLASS_NEW_RESOLVE | JSCLASS_GLOBAL_FLAGS,
JS_PropertyStub,
NULL,
JS_PropertyStub,
JS_StrictPropertyStub,
JS_EnumerateStub,
JS_ResolveStub,
JS_ConvertStub,
NULL,
JSCLASS_NO_OPTIONAL_MEMBERS
};
int _tmain(int argc, _TCHAR* argv[]) {
    /* Create a JS runtime. */
    JSRuntime *rt = JS_NewRuntime(16L * 1024L * 1024L);
    if (rt == NULL)
        return 1;
    /* Create a context. */
    JSContext *cx = JS_NewContext(rt, 8192);
    if (cx == NULL)
        return 1;
    JS_SetOptions(cx, JSOPTION_VAROBJFIX);
    JSScript *script;
    JSObject *obj;
    const char *js = "function a() { var tmp; tmp = 1 + 2; temp = temp * 2; alert(tmp); return 1; }";
    obj = JS_CompileScript(cx, JS_GetGlobalObject(cx), js, strlen(js), "code.js", NULL);
    script = obj->getScript();
    if (script == NULL)
        return JS_FALSE; /* compilation error */
    js::analyze::Script *sc = new js::analyze::Script();
    sc->analyze(cx, script);
    JS_DestroyContext(cx);
    JS_DestroyRuntime(rt);
    /* Shut down the JS engine. */
    JS_ShutDown();
    return 1;
}
Which version of SpiderMonkey are you using? I am using the one that comes with Firefox 10, so the API may be different.
You should create a new global object and initialize it by calling JS_NewCompartmentAndGlobalObject() and JS_InitStandardClasses() before compiling your script:
.....
/*
 * Create the global object in a new compartment.
 * You always need a global object per context.
 */
global = JS_NewCompartmentAndGlobalObject(cx, &global_class, NULL);
if (global == NULL)
    return 1;
/*
 * Populate the global object with the standard JavaScript
 * function and object classes, such as Object, Array, Date.
 */
if (!JS_InitStandardClasses(cx, global))
    return 1;
......
Note: the function JS_NewCompartmentAndGlobalObject() is obsolete now; check the latest JSAPI documentation for the version you are using. Your JS_CompileScript() call just tries to retrieve a global object which has not been created, and that probably causes the exception.
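Put together, the compile path in your _tmain() would then look something like this (a sketch against the mozjs185-era API shown in your code; note that JS_CompileScript() there takes a starting line number as its last argument):
/* Sketch: create and populate the global before compiling. */
JSObject *global = JS_NewCompartmentAndGlobalObject(cx, &global_class, NULL);
if (global == NULL)
    return 1;
if (!JS_InitStandardClasses(cx, global))
    return 1;
JSObject *obj = JS_CompileScript(cx, global, js, strlen(js), "code.js", 1);
if (obj == NULL)
    return 1; /* compilation error */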
How about using the function "SaveCompiled"? It will save the object/opcode (compiled JavaScript) to a file.

WinXP: sendto() failed with 10014 (WSAEFAULT) if destination address is const-qualified, IPv4-specific

It seems I found a bug in Windows...
OK, let it not be such a pathetic opening. I'm trying to do a generic sendto() operation for UDP and occasionally found that WinXP (32-bit, SP3, checked on real and virtual machines) returns "-1" bytes sent, with WSAGetLastError() reporting error 10014 (aka WSAEFAULT). It occurs only with IPv4 addresses (the same code with an IPv6 destination works perfectly). The major condition to reproduce it is the use of a "const struct sockaddr_in" declared at global scope. Here is the plain C code for VS2010 (I've also tried with Eclipse+MinGW and got the same results):
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <winsock2.h>
#include <stdint.h>
#pragma comment(lib, "Ws2_32.lib")
#define INADDR_UPNP_V4 0xEFFFFFFA
#define htons(x) ((((uint16_t)(x) & 0xFF00) >> 8) | (((uint16_t)(x) & 0x00FF) << 8))
#define htonl(x) ((((uint32_t)(x) & 0xFF000000) >> 24) | (((uint32_t)(x) & 0x00FF0000) >> 8) | (((uint32_t)(x) & 0x0000FF00) << 8) | (((uint32_t)(x) & 0x000000FF) << 24))
// Magic "const" qualifier, causes run-time error
const struct sockaddr_in addr_global = {
AF_INET,
htons(1900),
{
htonl(INADDR_UPNP_V4)
},
{0},
};
int main(int argc, char** argv)
{
#define CR_LF "\r\n"
    // these two lines un-buffer console window output on Win32, see URL below for details
    // http://wiki.eclipse.org/CDT/User/FAQ#Eclipse_console_does_not_show_output_on_Windows
    setvbuf(stdout, NULL, _IONBF, 0);
    setvbuf(stderr, NULL, _IONBF, 0);
    printf("Started\n");
    const struct sockaddr_in addr_local = {
        AF_INET,
        htons(1900),
        {
            htonl(INADDR_UPNP_V4)
        },
        {0},
    };
    const char *MSEARCH_REQUEST_V4 = "M-SEARCH * HTTP/1.1"CR_LF
                                     "Host:239.255.255.250:1900"CR_LF
                                     "MAN:\"ssdp:discover\""CR_LF
                                     "ST:ssdp:all"CR_LF
                                     "MX:3"CR_LF
                                     CR_LF;
    const int MSEARCH_LEN = strlen(MSEARCH_REQUEST_V4);
    WSADATA wsaData;
    int res = WSAStartup(MAKEWORD(2, 2), &wsaData);
    int af = AF_INET;
    int sock_id = socket(af, SOCK_DGRAM, IPPROTO_UDP);
    if (-1 == sock_id) {
        printf("%s: socket() failed with error %i/%i\n", __FUNCTION__,
               errno, WSAGetLastError());
        return 1;
    }
    int data_sent = 0;
    printf("1st sendto()\n");
    data_sent = sendto(sock_id, MSEARCH_REQUEST_V4,
                       MSEARCH_LEN, 0,
                       (const struct sockaddr * const)&addr_local,
                       sizeof(struct sockaddr_in));
    if (data_sent < 0) {
        printf("%s: sendto(local) failed with error %i/%i\n", __FUNCTION__,
               errno, WSAGetLastError());
    }
    printf("2nd sendto(), will fail on WinXP SP3 (32 bit)\n");
    data_sent = sendto(sock_id, MSEARCH_REQUEST_V4,
                       MSEARCH_LEN, 0,
                       (const struct sockaddr * const)&addr_global,
                       sizeof(struct sockaddr_in));
    if (data_sent < 0) {
        printf("%s: sendto(global) failed with error %i/%i\n", __FUNCTION__,
               errno, WSAGetLastError());
    }
    closesocket(sock_id);
    res = WSACleanup();
    printf("Finished\n");
    return 0;
}
So, if you run this code on Win7, for example, it will be absolutely OK. But WinXP fails on the addr_global usage if it is equipped with the "const" qualifier (see the "Magic" comment above). Also, the "Output" window says:
First-chance exception at 0x71a912f4 in SendtoBugXP.exe: 0xC0000005:
Access violation writing location 0x00415744.
With the help of the "Autos" window, it's easy to figure out that location 0x00415744 is the address of the addr_global.sin_zero field. It seems WinXP writes zeros there and violates the memory access flags. Or is this just silly me, trying to go through the wrong door?
I'd appreciate your comments a lot. Thanks in advance.
Yeah, you found a bug: sendto() has that argument declared const but writes to it anyway. Good luck getting it fixed, though. Hint: it might be in your antivirus or firewall.
To summarize results from other forums: yes, this is a Windows bug, existing up to WinXP in the "desktop" segment and Win2003 in the "server" segment.
The WinSock code attempts to force-fill the "sin_zero" field with zeros, and the "const" global scope causes a memory access violation. The stack trace looks about like this:
Thread [1] 0 (Suspended : Signal : SIGSEGV:Segmentation fault)
WSHTCPIP!WSHGetSockaddrType() at 0x71a912f4
0x71a52f9f
WSAConnect() at 0x71ab2fd7
main() at tests_main.c:77 0x401584
The same behavior has been observed on bind() by other people.
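A workaround follows directly from that diagnosis (a sketch, untested on XP, but it targets exactly the faulting write): hand sendto() a writable stack copy of the address, so the zero-fill of sin_zero never touches read-only global memory.
/* Workaround sketch: copy the const, global address into a local
 * (writable) struct before calling sendto(), so WinXP's attempt to
 * zero sin_zero cannot fault on write-protected memory. */
struct sockaddr_in addr_writable = addr_global;
data_sent = sendto(sock_id, MSEARCH_REQUEST_V4, MSEARCH_LEN, 0,
                   (struct sockaddr *)&addr_writable,
                   sizeof(addr_writable));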

GNU Radio File Format for the recorded samples

Do you know the format in which GNU Radio (the File Sink in GNU Radio Companion) stores the samples in the binary file?
I need to read these samples in Matlab, but the problem is that the file is too big to be read into Matlab.
I am writing a program in C++ to read this binary file.
The file sink is just a dump of the data stream. If the data stream content was simple bytes, then the content of the file is straightforward. If the data stream contained complex numbers, then the file will contain a list of complex numbers, where each complex number is given by two floats and each float by (usually) 4 bytes.
See the files gnuradio/gnuradio-core/src/lib/io/gr_file_sink.cc and gr_file_source.cc for the implementations of the gnuradio file reading and writing blocks.
You could also use python and gnuradio to convert the files into some other format.
from gnuradio import gr
# Assuming the data stream was complex numbers.
src = gr.file_source(gr.sizeof_gr_complex, "the_file_name")
snk = gr.vector_sink_c()
tb = gr.top_block()
tb.connect(src, snk)
tb.run()
# The complex numbers are then accessible as a python list.
data = snk.data()
Ben's answer still stands, but it's from a time long past (the module organization points at GNU Radio 3.6, I think). Organizationally, things are different now; data-wise, the File Sink has remained the same.
GNU Radio now has fairly extensive block documentation in its wiki. In particular, the File Sink documentation page has a section on Handling File Sink data; not to overquote it:
// This is C++17
#include <algorithm>
#include <cmath>
#include <complex>
#include <cstddef>
#include <filesystem>
#include <fstream>
#include <string_view>
#include <vector>
#include <fmt/format.h>
#include <fmt/ranges.h>
using sample_t = std::complex<float>;
using power_t = float;
constexpr std::size_t read_block_size = 1 << 16;
int main(int argc, char *argv[]) {
    // expect exactly one argument, a file name
    if (argc != 2) {
        fmt::print(stderr, "Usage: {} FILE_NAME", argv[0]);
        return -1;
    }
    // just for convenience; we could as well just use `argv[1]` throughout the
    // code
    std::string_view filename(argv[1]);
    // check whether file exists
    if (!std::filesystem::exists(filename.data())) {
        fmt::print(stderr, "file '{:s}' not found\n", filename);
        return -2;
    }
    // calculate how many samples to read
    auto file_size = std::filesystem::file_size(std::filesystem::path(filename));
    auto samples_to_read = file_size / sizeof(sample_t);
    // construct and reserve container for resulting powers
    std::vector<power_t> powers;
    powers.reserve(samples_to_read);
    std::ifstream input_file(filename.data(), std::ios_base::binary);
    if (!input_file) {
        fmt::print(stderr, "error opening '{:s}'\n", filename);
        return -3;
    }
    // construct and reserve container for read samples
    // if read_block_size == 0, then read the whole file at once
    std::vector<sample_t> samples;
    if (read_block_size)
        samples.resize(read_block_size);
    else
        samples.resize(samples_to_read);
    fmt::print(stderr, "Reading {:d} samples…\n", samples_to_read);
    while (samples_to_read) {
        auto read_now = std::min(samples_to_read, samples.size());
        input_file.read(reinterpret_cast<char *>(samples.data()),
                        read_now * sizeof(sample_t));
        for (size_t idx = 0; idx < read_now; ++idx) {
            auto magnitude = std::abs(samples[idx]);
            powers.push_back(magnitude * magnitude);
        }
        samples_to_read -= read_now;
    }
    // we're not actually doing anything with the data. Let's print it!
    fmt::print("Power\n{}\n", fmt::join(powers, "\n"));
}
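For completeness: the snippet depends on the {fmt} library, so (assuming a system-wide {fmt} install, and calling the file read_file_sink.cpp, both hypothetical) it should build with something like g++ -std=c++17 read_file_sink.cpp -lfmt.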