How to control boost::asio initialisation on Windows?

I'm developing a program for a system running Windows 7 Embedded.
The program uses boost::asio sockets to communicate on both UDP and TCP sockets (it acts as a DHCP server and it's controlled by a RESTful interface).
Normally it works fine. However, occasionally it doesn't initialise correctly and won't respond to any DHCP or HTTP messages. I suspect this is because the program starts before the underlying Winsock is ready. I naively attempted to wait for Winsock to initialise before creating the boost::asio::io_service and sockets, using the code below:
WSADATA wsaData;
while (WSAStartup(MAKEWORD(2,2), &wsaData))
{
BOOST_LOG_TRIVIAL(error) << "WSAStartup failed, waiting...";
boost::this_thread::sleep_for(COMMS_DELAY_PERIOD);
}
But I now realise that boost::asio is initialised before main is called on a Windows system. See the code below from winsock_init.hpp:
// Static variable to ensure that winsock is initialised before main, and
// therefore before any other threads can get started.
static const winsock_init<>& winsock_init_instance = winsock_init<>(false);
Is there a way to ensure that asio is correctly initialised in a Windows system before using it, without editing the DLL?

I now appreciate that the comment about DLLs in winsock_init.hpp is just an example; the asio library isn't built into a DLL.
Also, the example code only applies to MSVC (not surprising, since it's Windows specific). However, our project builds with both MSVC and MinGW (gcc) under Qt. The following code worked for us:
#include <boost/asio/detail/winsock_init.hpp>
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable:4073)  // C4073: initializers put in library initialization area
#pragma init_seg(lib)          // construct this object during the "lib" init phase, before user statics
boost::asio::detail::winsock_init<>::manual manual_winsock_init;
#pragma warning(pop)
#else // using MinGW (gcc)
// init_priority(101) is the earliest priority available to user code, so this
// object should be constructed before asio's own static initialisers.
boost::asio::detail::winsock_init<>::manual manual_winsock_init
__attribute__ ((init_priority (101)));
#endif
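With the manual object in place, asio's own static initialiser no longer calls WSAStartup, so the application is responsible for starting and cleaning up Winsock itself. As a minimal sketch (not the poster's actual code; COMMS_DELAY_PERIOD is the same constant used in the question and the server setup is elided), main could then retry WSAStartup until the stack is ready before creating any asio objects:
#include <winsock2.h>
#include <boost/asio.hpp>
#include <boost/log/trivial.hpp>
#include <boost/thread.hpp>

int main()
{
    // Keep retrying until Winsock reports that it is ready.
    WSADATA wsaData;
    while (WSAStartup(MAKEWORD(2, 2), &wsaData))
    {
        BOOST_LOG_TRIVIAL(error) << "WSAStartup failed, waiting...";
        boost::this_thread::sleep_for(COMMS_DELAY_PERIOD);
    }
    {
        // Safe to create asio objects now that Winsock is initialised.
        boost::asio::io_service io_service;
        // ... set up the UDP/TCP servers and run the io_service ...
    }
    // Balance the successful WSAStartup call before exiting.
    WSACleanup();
    return 0;
}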

Related

X_NUCLEO_IHM03A1 in Mbed Studio?

I want to use Mbed Studio to write a program for the X_NUCLEO_IHM03A1 with a NUCLEO-L476RG board, using the official library and example for one motor. As far as I understand, the library supports only Mbed OS 2, while Mbed Studio can only work with Mbed OS 5.
After compiling the project, my device reboots with the following message:
++ MbedOS Error Info ++
Error Status: 0x80010133 Code: 307 Module: 1 Error Message: Mutex: 0x20000578, Not allowed in ISR context
Location: 0x800E6DD
Error Value: 0x20000578
Current Thread: main Id: 0x20002018 Entry: 0x800B90D StackSize: 0x1000 StackMem: 0x200008E0 SP: 0x20001600
For more info, visit: https://mbed.com/s/error?error=0x80010133&tgt=NUCLEO_L476RG
-- MbedOS Error Info --
= System will be rebooted due to a fatal error =
= Reboot count(=1) reached maximum, system will halt after rebooting
So I thought these might be possible solutions:
1) rewrite the library somehow so it works with Mbed OS 5 (I am not sure what exactly has to be modified)
2) use Mbed OS 2 in Mbed Studio (not sure if that is possible)
X_NUCLEO_IHM03A1 library - https://os.mbed.com/teams/ST/code/X_NUCLEO_IHM03A1/
How can I solve the problem so that a project for the X_NUCLEO_IHM03A1 compiled in Mbed Studio works?
Commenting out the line __disable_irq(); solved the problem. Thank you Nils4526.
In my case the function was in the PowerStep01.h file and looked like this:
void Powerstep01_Board_DisableIrq(void)
{
// __disable_irq();
}
I got the same error code that you have but with the expansion board IHM01A1 and the Nucleo board F411RE. I don't know if this will work for your board but I think it is worth a try. The names are different but besides that the code looks similar.
Using the bare metal profile works without any modification, but the following change works with Mbed OS 5 as well.
In the file Components/L6474/L6474.h, comment out the __disable_irq(); call in this function:
void L6474_DisableIrq(void)
{
// __disable_irq();
}
This function is called in the main file when reading or writing over SPI.
I don't know why this works, since the Mbed API asks you to disable interrupts before using the SPI write function, but somehow removing the line that disables interrupts makes it work.
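For what it's worth, a plausible explanation (my own guess, not something stated above) is that with interrupts masked, Mbed OS 5 treats the caller as if it were in interrupt context, so the mutex the SPI driver takes internally is refused with exactly this "Not allowed in ISR context" error. A minimal sketch of the pattern that triggers it, with hypothetical pin names:
#include "mbed.h"

SPI spi(D11, D12, D13);   // hypothetical SPI pins, for illustration only

void write_register(uint8_t value)
{
    __disable_irq();   // masks IRQs, so the RTOS now considers the caller ISR-like
    spi.write(value);  // SPI::write() locks an internal mutex -> fatal "Not allowed in ISR context"
    __enable_irq();
}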

Trouble enabling libuv compilation with libwebsockets

I want to use libwebsockets in a foreign libuv loop.
My code (inspired by this simple example) compiles and links correctly, but at runtime, when the web page is requested, the browser never receives a response from the server.
I build both libwebsockets (v3.1.0) and libuv (v1.25.0) from source in my CMake project, using the following command line:
cmake -DLWS_WITH_LIBUV=1 .. && make
And the CMake output mentions the correct value for the option:
LWS_WITH_LIBEV = OFF
LWS_WITH_LIBUV = 1
LWS_WITH_LIBEVENT = OFF
Grepping for the option in the build directory gives the following (which looks ok too):
CMakeCache.txt:483:LWS_WITH_LIBUV:BOOL=ON
extern/libwebsockets/include/libwebsockets/lws-service.h:185:#ifdef LWS_WITH_LIBUV
extern/libwebsockets/include/libwebsockets/lws-service.h:209:#endif /* LWS_WITH_LIBUV */
extern/libwebsockets/include/libwebsockets.h:157:#ifdef LWS_WITH_LIBUV
extern/libwebsockets/include/libwebsockets.h:165:#endif /* LWS_WITH_LIBUV */
extern/libwebsockets/include/lws_config.h:72:#define LWS_WITH_LIBUV
extern/libwebsockets/lws_config.h:72:#define LWS_WITH_LIBUV
However, with the following code (the closest I have to a minimal (not) working example), no message is displayed.
#include <iostream>
#include <uv.h>

int main()
{
#ifdef LWS_WITH_LIBUV
    std::cout << "With libuv" << std::endl;
#endif
}
I've looked here and here and I do not know what to do next.
It turns out I had libwebsockets installed on my system and was linking against that system library, which had not been compiled with libuv support.
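One way to confirm which libwebsockets the binary actually picks up is to print the library's version string next to the macro check. This is only a sketch using the stock lws_get_library_version() call; if it reports the system package instead of the in-tree v3.1.0 build, the include or link path is the culprit:
#include <iostream>
#include <libwebsockets.h>   // also pulls in lws_config.h, where LWS_WITH_LIBUV is defined

int main()
{
    // Reports the version of the libwebsockets that is really linked in.
    std::cout << "lws version: " << lws_get_library_version() << std::endl;
#ifdef LWS_WITH_LIBUV
    std::cout << "Headers built with libuv support" << std::endl;
#else
    std::cout << "Headers built WITHOUT libuv support" << std::endl;
#endif
    return 0;
}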

regsvr32 fails for simple freepascal COM dll

I'm completely new to Free Pascal and I'm trying to implement a simple DLL that registers a COM class.
Unfortunately I could find only little information about COM programming for Free Pascal, so I hope someone here can give me some hints or even a link to some examples.
So here is what I did:
my operating system is Windows 7 64 bit
downloaded and installed Lazarus 32bit version
Version #: 1.2.6
Date: 2014-10-11
FPC: Version 2.6.4
SVN Revision: 46529
i386-win32-win32/win64
installed the ActiveX package in Lazarus
made a new project - type Library with a simple TAutoObject and a default TAutoObjectFactory for the COM registration: source code included after this description
built the DLL
used regsvr32.exe to register my DLL --> this fails with
"make sure the binary is stored at the specified path ..."
Invalid access to memory location.
then I tried to change the default project options:
under Compiler Options - Config and Target, I set
Target OS: Win32
Target CPU family: i386
still the same error occurs
Project source
library LazarusSimpleComRegTest;
{$mode objfpc}{$H+}
uses
Classes,
{ you can add units after this }
ComServ, MyComObj;
exports
DllGetClassObject,
DllCanUnloadNow,
DllRegisterServer,
DllUnregisterServer;
end.
MyComObj Unit:
unit MyComObj;
{$mode objfpc}{$H+}
interface
uses
Classes, SysUtils, ComObj;
const
CLASS_Plugin: TGUID = '{5E020FB0-B593-4ADF-9288-801C2FD432CF}';
type
TPlugin = class(TAutoObject)
end;
implementation
uses ComServ;
initialization
TAutoObjectFactory.Create(ComServer, TPlugin, CLASS_Plugin,
ciMultiInstance, tmApartment);
end.
I think the main problem was that I did not include the type library as a resource in my DLL file. Now it works fine.
I've made a very basic and simple working example on GitHub with some basic documentation:
lazarus-com-example

Compiling GCDAsyncSocket for OS X 10.6

I'm having trouble compiling this wonderful TCP library with the 10.6 SDK.
I get:
/Users/cisary/Desktop/AI/AI/TCP/GCDAsyncSocket.m:185:11: error: instance variables may not be placed in class extension
uint32_t flags;
^
/Users/cisary/Desktop/AI/AI/TCP/GCDAsyncSocket.m:186:11: error: instance variables may not be placed in class extension
uint16_t config;
What does "instance variables may not be placed in class extension" mean?
Are you attempting to compile for i386? That will not work with GCDAsyncSocket.m as I'm finding it on github. Instance variables defined in class extensions are a modern runtime feature. Try compiling for x86_64.

Knowing if the app is running in a testing environment

Is there a way to know if the program is running in the development environment? I'm using Flurry Analytics and want to pass it a different app id, so the data doesn't get dirty with my tests during development.
What I'd like is something like this:
Boolean isDevEnv = .... (is this a test in the simulator or device,
OR is it a real user that downloaded the
app through the app store?)
if (isDevEnv)
    [FlurryAnalytics startSession:@"firstAppId"];
else
    [FlurryAnalytics startSession:@"secondAppId"];
To be clear, this is not what I'm after, because I test using a real device as well as the simulator.
In the build settings you'll have to set flags depending on the build environment.
Then, use #ifdef and #define to set the appid.
#ifdef DEBUG
# define APPID ...
#else
# define APPID ...
#endif
In your build settings, define a new flag for the App Store release version. Then use #ifdef to determine at compile time which appid to use.
If you don't want to use the DEBUG flag and DEBUG environment, create a new build configuration (duplicate the Release configuration) and, in the build settings under Preprocessor Macros, add a FlurryAnalytics flag. In your code check if(FlurryAnalytics). Create a new scheme in Xcode that builds an ipa using this new release build configuration.
Well, it seems this is done by default by Xcode: in the project's Build Settings, under Apple LLVM compiler 3.1 - Preprocessing (this is in Xcode 4.3.2, for future reference), a setting called DEBUG is populated with the value 1.
So, I didn't really have to do anything, just this in the code (in my case in the AppDelegate's didFinishLaunchingWithOptions method):
[FlurryAnalytics startSession:DEBUG ? @"firstAppId" : @"secondAppId"];