Notification when wifi connected (OS X) - objective-c

I need a notification from the system when AirPort connects to an access point. Is there any way to do that with the SystemConfiguration framework? I have trouble understanding the SystemConfiguration API documentation.

You are on the right track with the SystemConfiguration framework, which offers the SCNetworkReachability family of functions. You can use
SCNetworkReachabilitySetCallback
to register a callback that is invoked whenever reachability changes, and
SCNetworkReachabilityScheduleWithRunLoop
to schedule the reachability check on a run loop.
Or you might use a Reachability framework (available for both macOS and iOS) built on top of the SystemConfiguration framework, which makes things even easier (higher level).
If you want to go the SystemConfiguration route, this is how you check the current reachability and install the callback to be notified of changes (source):
- (void)checkReachability {
NSString *server = [[NSUserDefaults standardUserDefaults] stringForKey:@"NCIDServer"];
if (server == nil) {
ncid_message_callback(self, [NSLocalizedString(@"No caller ID server was specified.", nil) UTF8String]);
return;
}
const char *serverName = [[[server componentsSeparatedByString:@":"] objectAtIndex:0] UTF8String];
SCNetworkReachabilityContext context = {0, (void *)self, NULL, NULL, NULL};
networkReachability = SCNetworkReachabilityCreateWithName(NULL, serverName);
if (networkReachability == NULL)
goto fail;
// If reachability information is available now, we don't get a callback later
SCNetworkConnectionFlags flags;
if (SCNetworkReachabilityGetFlags(networkReachability, &flags))
networkReachabilityCallback(networkReachability, flags, self);
if (!SCNetworkReachabilitySetCallback(networkReachability, networkReachabilityCallback, &context))
goto fail;
if (!SCNetworkReachabilityScheduleWithRunLoop(networkReachability, [[NSRunLoop currentRunLoop] getCFRunLoop], kCFRunLoopCommonModes))
goto fail;
return;
fail:
if (networkReachability != NULL)
CFRelease(networkReachability);
networkReachability = NULL; //-- ivar representing current reachability
}
And this is a sample of the callback:
static void networkReachabilityCallback(SCNetworkReachabilityRef target,
SCNetworkConnectionFlags flags,
void *object) {
// Observed flags:
// - nearly gone: kSCNetworkFlagsReachable alone (ignored)
// - gone: kSCNetworkFlagsTransientConnection | kSCNetworkFlagsReachable | kSCNetworkFlagsConnectionRequired
// - connected: kSCNetworkFlagsIsDirect | kSCNetworkFlagsReachable
if (networkReachability == NULL)
return;
if ((flags & kSCNetworkFlagsReachable) && !(flags & kSCNetworkFlagsConnectionRequired)) {
if (isReachable) // typically receive a reachable message ~20ms before the unreachable one
return;
isReachable = YES;
ncid_network_kill();
[NSThread detachNewThreadSelector:@selector(runThread:) toTarget:object withObject:nil];
} else {
isReachable = NO;
ncid_network_kill();
}
}
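When you are done observing (for example in dealloc), tear the reachability object down again. A minimal sketch, using the same networkReachability ivar as above:
if (networkReachability != NULL) {
    SCNetworkReachabilityUnscheduleFromRunLoop(networkReachability,
                                               [[NSRunLoop currentRunLoop] getCFRunLoop],
                                               kCFRunLoopCommonModes);
    SCNetworkReachabilitySetCallback(networkReachability, NULL, NULL); // passing NULL removes the callback
    CFRelease(networkReachability);
    networkReachability = NULL;
}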

You need to use Apple's Reachability class for this. http://developer.apple.com/library/ios/#samplecode/Reachability/Listings/Classes_Reachability_m.html#//apple_ref/doc/uid/DTS40007324-Classes_Reachability_m-DontLinkElementID_6
You can check this thread for how to use this class:
How do I receive notifications that the connection has changed type (3G, Edge, Wifi, GPRS)
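With the Reachability class, the usual pattern is to start its notifier and observe the change notification it posts. A rough sketch, assuming the class and constant names from Apple's sample code (they vary slightly between versions), with self.reachability as a hypothetical property to keep the object alive:
#import "Reachability.h"

- (void)startWatchingReachability {
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(reachabilityChanged:)
                                                 name:kReachabilityChangedNotification
                                               object:nil];
    self.reachability = [Reachability reachabilityForInternetConnection];
    [self.reachability startNotifier];
}

- (void)reachabilityChanged:(NSNotification *)note {
    Reachability *reachability = note.object;
    if ([reachability currentReachabilityStatus] != NotReachable) {
        // The network (on a Mac, typically the AirPort/Wi-Fi interface) just became reachable.
    }
}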

How to properly close a FFmpeg stream and AVFormatContext without leaking memory?

I have built an app that uses FFmpeg to connect to remote IP cameras in order to receive video and audio frames via RTSP 2.0.
The app is built using Xcode 10-11 and Objective-C with a custom FFmpeg build config.
The architecture is the following:
MyApp
Document_0
RTSPContainerObject_0
RTSPObject_0
RTSPContainerObject_1
RTSPObject_1
...
Document_1
...
GOAL:
After closing Document_0 no FFmpeg objects should be leaked.
The closing process should stop frame reading and destroy all objects which use FFmpeg.
PROBLEM:
Somehow Xcode's memory debugger shows two instances of MyApp.
FACTS:
macOS's Activity Monitor doesn't show two instances of MyApp.
macOS's Activity Monitor doesn't show any instances of FFmpeg or other child processes.
The issue is not related to some leftover memory due to a late memory snapshot since it can be reproduced easily.
Xcode's memory debugger shows that the second instance only has RTSPObject's AVFormatContext and no other objects.
The second instance has an AVFormatContext, and the RTSPObject still has a pointer to that AVFormatContext.
FACTS:
Opening and closing the second document Document_1 leads to the same problem, leaving two objects leaked. This means there is a bug that scales: more and more memory is used and becomes unavailable.
Here is my termination code:
- (void)terminate
{
// * Video and audio frame provisioning termination *
[self stopVideoStream];
[self stopAudioStream];
// *
// * Video codec termination *
avcodec_free_context(&_videoCodecContext); // NULL pointer safe.
self.videoCodecContext = NULL;
// *
// * Audio codec termination *
avcodec_free_context(&_audioCodecContext); // NULL pointer safe.
self.audioCodecContext = NULL;
// *
if (self.packet)
{
// Free the packet that was allocated by av_read_frame.
av_packet_unref(&packet); // The documentation doesn't mention NULL safety.
self.packet = NULL;
}
if (self.currentAudioPacket)
{
av_packet_unref(_currentAudioPacket);
self.currentAudioPacket = NULL;
}
// Free raw frame data.
av_freep(&_rawFrameData); // NULL pointer safe.
// Free the swscaler context swsContext.
self.isFrameConversionContextAllocated = NO;
sws_freeContext(scallingContext); // NULL pointer safe.
[self.audioPacketQueue removeAllObjects];
self.audioPacketQueue = nil;
self.audioPacketQueueLock = nil;
self.packetQueueLock = nil;
self.audioStream = nil;
BXLogInDomain(kLogDomainSources, kLogLevelVerbose, @"%s:%d: All streams have been terminated!", __FUNCTION__, __LINE__);
// * Session context termination *
AVFormatContext *pFormatCtx = self.sessionContext;
BOOL shouldProceedWithInputSessionTermination = self.isInputStreamOpen && self.shouldTerminateStreams && pFormatCtx;
NSLog(#"\nTerminating session context...");
if (shouldProceedWithInputSessionTermination)
{
NSLog(#"\nTerminating...");
//av_write_trailer(pFormatCtx);
// Discard all internally buffered data.
avformat_flush(pFormatCtx); // The documentation doesn't mention NULL safety.
// Close an opened input AVFormatContext and free it and all its contents.
// WARNING: Closing an non-opened stream will cause avformat_close_input to crash.
avformat_close_input(&pFormatCtx); // The documentation doesn't mention NULL safety.
NSLog(#"Logging leftovers - %p, %p %p", self.sessionContext, _sessionContext, pFormatCtx);
avformat_free_context(pFormatCtx);
NSLog(#"Logging content = %c", *self.sessionContext);
//avformat_free_context(pFormatCtx); - Not needed because avformat_close_input is closing it.
self.sessionContext = NULL;
}
// *
}
IMPORTANT: The termination sequence is:
New frame will be read.
-[(RTSPObject)StreamInput currentVideoFrameDurationSec]
-[(RTSPObject)StreamInput frameDuration:]
-[(RTSPObject)StreamInput currentCGImageRef]
-[(RTSPObject)StreamInput convertRawFrameToRGB]
-[(RTSPObject)StreamInput pixelBufferFromImage:]
-[(RTSPObject)StreamInput cleanup]
-[(RTSPObject)StreamInput dealloc]
-[(RTSPObject)StreamInput stopVideoStream]
-[(RTSPObject)StreamInput stopAudioStream]
Terminating session context...
Terminating...
Logging leftovers - 0x109ec6400, 0x109ec6400 0x109ec6400
Logging content = \330
-[Document dealloc]
NOT WORKING SOLUTIONS:
Changing the order of object releases (The AVFormatContext has been freed first but it didn't lead to any change).
Calling RTSPObject's cleanup method much sooner to give FFmpeg more time to handle object releases.
Reading a lot of SO answers and FFmpeg documentation to find a clean cleanup process or newer code which might highlight why the object release doesn't happen properly.
I am currently reading the documentation on AVFormatContext since I believe I am forgetting to release something. This belief is based on the memory debugger's output showing that an AVFormatContext is still around.
Here is my creation code:
#pragma mark # Helpers - Start
- (NSError *)openInputStreamWithVideoStreamId:(int)videoStreamId
audioStreamId:(int)audioStreamId
useFirst:(BOOL)useFirstStreamAvailable
inInit:(BOOL)isInitProcess
{
// NSLog(#"%s", __PRETTY_FUNCTION__); // RTSP
self.status = StreamProvisioningStatusStarting;
AVCodec *decoderCodec;
NSString *rtspURL = self.streamURL;
NSString *errorMessage = nil;
NSError *error = nil;
self.sessionContext = NULL;
self.sessionContext = avformat_alloc_context();
AVFormatContext *pFormatCtx = self.sessionContext;
if (!pFormatCtx)
{
// Create approp error.
return error;
}
// MUST be called before avformat_open_input().
av_dict_free(&_sessionOptions);
self.sessionOptions = 0;
if (self.usesTcp)
{
// "rtsp_transport" - Set RTSP transport protocols.
// Allowed are: udp_multicast, tcp, udp, http.
av_dict_set(&_sessionOptions, "rtsp_transport", "tcp", 0);
}
av_dict_set(&_sessionOptions, "rtsp_transport", "tcp", 0);
// Open an input stream and read the header with the demuxer options.
// WARNING: The stream must be closed with avformat_close_input()
if (avformat_open_input(&pFormatCtx, rtspURL.UTF8String, NULL, &_sessionOptions) != 0)
{
// WARNING: Note that a user-supplied AVFormatContext (pFormatCtx) will be freed on failure.
self.isInputStreamOpen = NO;
// Create approp error.
return error;
}
self.isInputStreamOpen = YES;
// user-supplied AVFormatContext pFormatCtx might have been modified.
self.sessionContext = pFormatCtx;
// Retrieve stream information.
if (avformat_find_stream_info(pFormatCtx,NULL) < 0)
{
// Create approp error.
return error;
}
// Find the first video stream
int streamCount = pFormatCtx->nb_streams;
if (streamCount == 0)
{
// Create approp error.
return error;
}
int noStreamsAvailable = pFormatCtx->streams == NULL;
if (noStreamsAvailable)
{
// Create approp error.
return error;
}
// Result. An Index can change, an identifier shouldn't.
self.selectedVideoStreamId = STREAM_NOT_FOUND;
self.selectedAudioStreamId = STREAM_NOT_FOUND;
// Fallback.
int firstVideoStreamIndex = STREAM_NOT_FOUND;
int firstAudioStreamIndex = STREAM_NOT_FOUND;
self.selectedVideoStreamIndex = STREAM_NOT_FOUND;
self.selectedAudioStreamIndex = STREAM_NOT_FOUND;
for (int i = 0; i < streamCount; i++)
{
// Looking for video streams.
AVStream *stream = pFormatCtx->streams[i];
if (!stream) { continue; }
AVCodecParameters *codecPar = stream->codecpar;
if (!codecPar) { continue; }
if (codecPar->codec_type==AVMEDIA_TYPE_VIDEO)
{
if (stream->id == videoStreamId)
{
self.selectedVideoStreamId = videoStreamId;
self.selectedVideoStreamIndex = i;
}
if (firstVideoStreamIndex == STREAM_NOT_FOUND)
{
firstVideoStreamIndex = i;
}
}
// Looking for audio streams.
if (codecPar->codec_type==AVMEDIA_TYPE_AUDIO)
{
if (stream->id == audioStreamId)
{
self.selectedAudioStreamId = audioStreamId;
self.selectedAudioStreamIndex = i;
}
if (firstAudioStreamIndex == STREAM_NOT_FOUND)
{
firstAudioStreamIndex = i;
}
}
}
// Use first video and audio stream available (if possible).
if (self.selectedVideoStreamIndex == STREAM_NOT_FOUND && useFirstStreamAvailable && firstVideoStreamIndex != STREAM_NOT_FOUND)
{
self.selectedVideoStreamIndex = firstVideoStreamIndex;
self.selectedVideoStreamId = pFormatCtx->streams[firstVideoStreamIndex]->id;
}
if (self.selectedAudioStreamIndex == STREAM_NOT_FOUND && useFirstStreamAvailable && firstAudioStreamIndex != STREAM_NOT_FOUND)
{
self.selectedAudioStreamIndex = firstAudioStreamIndex;
self.selectedAudioStreamId = pFormatCtx->streams[firstAudioStreamIndex]->id;
}
if (self.selectedVideoStreamIndex == STREAM_NOT_FOUND)
{
// Create approp error.
return error;
}
// See AVCodecID for codec listing.
// * Video codec setup:
// 1. Find the decoder for the video stream with the given codec id.
AVStream *stream = pFormatCtx->streams[self.selectedVideoStreamIndex];
if (!stream)
{
// Create approp error.
return error;
}
AVCodecParameters *codecPar = stream->codecpar;
if (!codecPar)
{
// Create approp error.
return error;
}
decoderCodec = avcodec_find_decoder(codecPar->codec_id);
if (decoderCodec == NULL)
{
// Create approp error.
return error;
}
// Get a pointer to the codec context for the video stream.
// WARNING: The resulting AVCodecContext should be freed with avcodec_free_context().
// Replaced:
// self.videoCodecContext = pFormatCtx->streams[self.selectedVideoStreamIndex]->codec;
// With:
self.videoCodecContext = avcodec_alloc_context3(decoderCodec);
avcodec_parameters_to_context(self.videoCodecContext,
codecPar);
self.videoCodecContext->thread_count = 4;
NSString *description = [NSString stringWithUTF8String:decoderCodec->long_name];
// 2. Open codec.
if (avcodec_open2(self.videoCodecContext, decoderCodec, NULL) < 0)
{
// Create approp error.
return error;
}
// * Audio codec setup:
if (self.selectedAudioStreamIndex > -1)
{
[self setupAudioDecoder];
}
// Allocate a raw video frame data structure. Contains audio and video data.
self.rawFrameData = av_frame_alloc();
self.outputWidth = self.videoCodecContext->width;
self.outputHeight = self.videoCodecContext->height;
if (!isInitProcess)
{
// Triggering notifications in init process won't change UI since the object is created locally. All
// objects which need data access to this object will not be able to get it. That's why we don't notify anyone about the changes.
[NSNotificationCenter.defaultCenter postNotificationName:NSNotification.rtspVideoStreamSelectionChanged
object:nil userInfo: self.selectedVideoStream];
[NSNotificationCenter.defaultCenter postNotificationName:NSNotification.rtspAudioStreamSelectionChanged
object:nil userInfo: self.selectedAudioStream];
}
return nil;
}
UPDATE 1
The initial architecture allowed calls from any thread, and most of this code ended up running on the main thread. That was not appropriate, since opening the stream input can take several seconds, during which the main thread is blocked waiting for a network response inside FFmpeg. To solve this I implemented the following rules:
Creation and the initial setup are only allowed on the background_thread (see code snippet "1" below).
Changes are allowed on the current_thread(Any).
Termination is allowed on the current_thread(Any).
After removing main thread checks and dispatch_asyncs to background threads, leaking has stopped and I can't reproduce the issue anymore:
// Code that produces the issue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// 1 - Create and do initial setup.
// This block creates the issue.
self.rtspObject = [[RTSPObject alloc] initWithURL: ... ];
[self.rtspObject openInputStreamWithVideoStreamId: ...
audioStreamId: ...
useFirst: ...
inInit: ...];
});
I still don't understand why Xcode's memory debugger said that this block was retained.
Any advice or idea is welcome.
If you use avformat_open_input to open a stream, you must use avformat_close_input to free it. Using avformat_free_context alone will leak all I/O-related allocations.
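In other words, the calls should be paired like this (a minimal sketch of the intended lifecycle, not the exact code above; url stands in for the RTSP address):
AVFormatContext *ctx = avformat_alloc_context();
if (avformat_open_input(&ctx, url, NULL, NULL) == 0) {
    // ... avformat_find_stream_info(), av_read_frame(), etc. ...
    avformat_close_input(&ctx); // closes the input, frees ctx and all of its contents, sets ctx to NULL
} else {
    // On failure, avformat_open_input() has already freed a user-supplied context.
    ctx = NULL;
}
// Do not call avformat_free_context() afterwards; the context is already gone.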

Find out if Mac is Force Touch - capable

Is it possible to find out if a Mac is Force Touch capable - either via a built-in Trackpad, like the new MacBook, or a Bluetooth device like the Magic Trackpad 2?
I'd like to present preferences specific to Force Touch if the Mac is Force Touch capable, but not display (or disable) those preferences if Force Touch is not available.
In the portion after the separator, you see the options I have in mind in the pic linked here. (sorry, embedding the pic itself didn't work).
So, not showing the preferences wouldn't restrict users who don't have Force Touch; it would just let users who have it configure how it should work, and those settings would be useless to users who don't have it.
Is there a way to achieve this?
Thank you and kind regards,
Matt
Edit: It's in Objective-C.
I figured it out:
+ (BOOL)isForceTouchCapable
{
if (![[self class] isAtLeastElCapitan])
return NO;
io_iterator_t iterator;
//get default HIDDevice dictionary
CFMutableDictionaryRef mDict = IOServiceMatching(kIOHIDDeviceKey);
//add manufacturer "Apple Inc." to dict
CFDictionaryAddValue(mDict, CFSTR(kIOHIDManufacturerKey), CFSTR("Apple Inc."));
//get matching services, depending on dict
IOReturn ioReturnValue = IOServiceGetMatchingServices(kIOMasterPortDefault, mDict, &iterator);
BOOL result = YES;
if (ioReturnValue != kIOReturnSuccess)
NSLog(#"error getting matching services for force touch devices");
else
{
//recursively go through each device found and its children and grandchildren, etc.
result = [[self class] _containsForceTouchDevice:iterator];
IOObjectRelease(iterator);
}
return result;
}
+ (BOOL)_containsForceTouchDevice:(io_iterator_t)iterator
{
io_object_t object = 0;
BOOL success = NO;
while ((object = IOIteratorNext(iterator)))
{
CFMutableDictionaryRef result = NULL;
kern_return_t state = IORegistryEntryCreateCFProperties(object, &result, kCFAllocatorDefault, 0);
if (state == KERN_SUCCESS && result != NULL)
{
if (CFDictionaryContainsKey(result, CFSTR("DefaultMultitouchProperties")))
{
CFDictionaryRef dict = CFDictionaryGetValue(result, CFSTR("DefaultMultitouchProperties"));
CFTypeRef val = NULL;
if (CFDictionaryGetValueIfPresent(dict, CFSTR("ForceSupported"), &val))
{
Boolean aBool = CFBooleanGetValue(val);
if (aBool) //supported
success = YES;
}
}
}
if (result != NULL)
CFRelease(result);
if (success)
{
IOObjectRelease(object);
break;
} else
{
io_iterator_t childIterator = 0;
kern_return_t err = IORegistryEntryGetChildIterator(object, kIOServicePlane, &childIterator);
if (!err)
{
success = [[self class] _containsForceTouchDevice:childIterator];
IOObjectRelease(childIterator);
} else
success = NO;
IOObjectRelease(object);
}
}
return success;
}
Just call + (BOOL)isForceTouchCapable and it will return YES if a Force Touch device is available (a Magic Trackpad 2 or a built-in Force Touch trackpad) or NO if there isn't one.
For those interested in how this came to be, I wrote about it on my blog with an example project.

No Internet Connection Handling on UIWebView and NSURLRequest [duplicate]

This question already has answers here:
How can I check for an active Internet connection on iOS or macOS?
(46 answers)
Closed 9 years ago.
I have an app which is entirely web-based and needs an internet connection to navigate around. Basically a website viewed through a UIWebView.
I need to be able to tell the user that no pages can load if they have no internet connection. Is there a simple way I can do this? Perhaps a check of whether the NSURLRequest failed?
Cheers
I would look at Apple's Reachability sample code to implement this reliably. One advantage of this approach is that you can notify the user of the current network status even when the user isn't actually clicking on any links in the web view.
Please check the following:
stackoverflow1
stackoverflow2
stackoverflow3
1. Add SystemConfiguration.framework to your project.
2. Import the following headers in your Connection.h file:
#import <sys/socket.h>
#import <netinet/in.h>
#import <SystemConfiguration/SystemConfiguration.h>
3. Declare the following class method in your Connection.h file:
+(BOOL)hasConnectivity;
4. Define this method in your Connection.m file:
+(BOOL)hasConnectivity {
struct sockaddr_in zeroAddress;
bzero(&zeroAddress, sizeof(zeroAddress));
zeroAddress.sin_len = sizeof(zeroAddress);
zeroAddress.sin_family = AF_INET;
SCNetworkReachabilityRef reachability = SCNetworkReachabilityCreateWithAddress(kCFAllocatorDefault, (const struct sockaddr*)&zeroAddress);
if(reachability != NULL) {
//NetworkStatus retVal = NotReachable;
SCNetworkReachabilityFlags flags;
if (SCNetworkReachabilityGetFlags(reachability, &flags)) {
if ((flags & kSCNetworkReachabilityFlagsReachable) == 0)
{
// if target host is not reachable
return NO;
}
if ((flags & kSCNetworkReachabilityFlagsConnectionRequired) == 0)
{
// if target host is reachable and no connection is required
// then we'll assume (for now) that you're on Wi-Fi
return YES;
}
if ((((flags & kSCNetworkReachabilityFlagsConnectionOnDemand ) != 0) ||
(flags & kSCNetworkReachabilityFlagsConnectionOnTraffic) != 0))
{
// ... and the connection is on-demand (or on-traffic) if the
// calling application is using the CFSocketStream or higher APIs
if ((flags & kSCNetworkReachabilityFlagsInterventionRequired) == 0)
{
// ... and no [user] intervention is needed
return YES;
}
}
if ((flags & kSCNetworkReachabilityFlagsIsWWAN) == kSCNetworkReachabilityFlagsIsWWAN)
{
// ... but WWAN connections are OK if the calling application
// is using the CFNetwork (CFSocketStream?) APIs.
return YES;
}
}
}
return NO;
}
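Then, before loading anything in the UIWebView, you can gate the request on this check. A minimal usage sketch (assuming ARC and that self.webView and url already exist in your code):
if ([Connection hasConnectivity]) {
    [self.webView loadRequest:[NSURLRequest requestWithURL:url]];
} else {
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"No Internet Connection"
                                                    message:@"A network connection is required to load this page."
                                                   delegate:nil
                                          cancelButtonTitle:@"OK"
                                          otherButtonTitles:nil];
    [alert show];
}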

Threading and Sockets in Objective-C

NOTE: I've edited my question. I've got it to connect and perform the first callback, but subsequent callbacks don't go through at all.
This is my first time writing Objective-C (with GNUstep; it's for a homework assignment). I've got the solution working, but I am trying to add something more to it. The app is a GUI client that connects to a server and gets data from it. Multiple clients can connect to the same server. If any one of the clients changes data that is residing on the server, the server sends a callback to all registered clients. This solution was originally implemented in Java (both client and server) and for the latest assignment, the professor wanted us to write an Objective-C client for it. He said that we don't need to handle callbacks, but I wanted to try anyway.
I am using NSThread and I wrote something that looks like this:
CallbackInterceptorThread.h
#import <Foundation/Foundation.h>
#import "AppDelegate.h"
@interface CallbackInterceptorThread : NSThread {
@private
NSString* clientPort;
AppDelegate* appDelegate;
}
- (id) initWithClientPort: (NSString*) aClientPort
appDelegate: (AppDelegate*) anAppDelegate;
- (void) main;
@end
CallbackInterceptorThread.m
#import <Foundation/Foundation.h>
#import "CallbackInterceptorThread.h"
#define MAXDATASIZE 4096
@implementation CallbackInterceptorThread
- (id) initWithClientPort: (NSString*) aClientPort
appDelegate: (AppDelegate*) anAppDelegate {
if((self = [super init])) {
[clientPort autorelease];
clientPort = [aClientPort retain];
[appDelegate autorelease];
appDelegate = [anAppDelegate retain];
}
return self;
}
- (void) main {
GSRegisterCurrentThread();
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
char* buffer = malloc(MAXDATASIZE);
Cst420ServerSocket* socket = [[Cst420ServerSocket alloc] initWithPort: clientPort];
[socket retain];
NSString* returnString;
while(YES) {
printf("Client waiting for callbacks on port %s\n", [clientPort cString]);
if([socket accept]) {
printf("Connection accepted!\n");
while(YES) {
printf("Inner loop\n");
sleep(1);
returnString = [socket receiveBytes: buffer maxBytes: MAXDATASIZE beginAt: 0];
printf("Received from Server |%s|\n", [returnString cString]);
if([returnString length] > 0) {
printf("Got a callback from server\n");
[appDelegate populateGui];
}
printf("Going to sleep now\n");
sleep(1);
}
[socket close];
}
}
}
@end
Cst420ServerSocket has been provided to us by the instructor. It looks like this:
#import "Cst420Socket.h"
#define PORT "4444"
/**
* Cst420Socket.m - objective-c class for manipulating stream sockets.
* Purpose: demonstrate stream sockets in Objective-C.
* These examples are buildable on MacOSX and GNUstep on top of Windows7
*/
// get sockaddr, IPv4 or IPv6:
void *get_in_addr(struct sockaddr *sa){
if (sa->sa_family == AF_INET) {
return &(((struct sockaddr_in*)sa)->sin_addr);
}
return &(((struct sockaddr_in6*)sa)->sin6_addr);
}
@implementation Cst420ServerSocket
- (id) initWithPort: (NSString*) port{
self = [super init];
int ret = 0;
memset(&hints, 0, sizeof hints);
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE; // use my IP
const char* portStr = [port UTF8String];
if ((rv = getaddrinfo(NULL, portStr, &hints, &servinfo)) != 0) {
fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
ret = 1;
}else{
for(p = servinfo; p != NULL; p = p->ai_next) {
if ((sockfd = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP))==-1){
perror("server: socket create error");
continue;
}
if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
#if defined(WINGS)
closesocket(sockfd);
#else
close(sockfd);
#endif
perror("server: bind error");
continue;
}
break;
}
if (p == NULL) {
fprintf(stderr, "server: failed to bind\n");
ret = 2;
}else{
freeaddrinfo(servinfo); // all done with this structure
if (listen(sockfd, BACKLOG) == -1) {
perror("server: listen error");
ret = 3;
}
}
if (ret == 0){
return self;
} else {
return nil;
}
}
}
- (BOOL) accept {
BOOL ret = YES;
#if defined(WINGS)
new_fd = accept(sockfd, NULL, NULL);
#else
new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
#endif
if (new_fd == -1) {
perror("server: accept error");
ret = NO;
}
connected = ret;
return ret;
}
- (int) sendBytes: (char*) byteMsg OfLength: (int) msgLength Index: (int) at{
int ret = send(new_fd, byteMsg, msgLength, 0);
if(ret == -1){
NSLog(#"error sending bytes");
}
return ret;
}
- (NSString* ) receiveBytes: (char*) byteMsg
maxBytes: (int) max
beginAt: (int) at {
int ret = recv(new_fd, byteMsg, max-1, at);
if(ret == -1){
NSLog(#"server error receiving bytes");
}
byteMsg[ret+at] = '\0';
NSString * retStr = [NSString stringWithUTF8String: byteMsg];
return retStr;
}
- (BOOL) close{
#if defined(WINGS)
closesocket(new_fd);
#else
close(new_fd);
#endif
connected = NO;
return YES;
}
- (void) dealloc {
#if defined(WINGS)
closesocket(sockfd);
#else
close(sockfd);
#endif
[super dealloc];
}
@end
Our professor also provided us an example of a simple echo server and client (the server just spits back whatever the client sent it) and I've used the same pattern in the thread.
My initial problem was that my callback interceptor thread didn't accept any (callback) connections from the server. The server said that it could not connect back to the client (ConnectException from Java; it said "Connection refused"). I was able to fix this by changing my instructor's code. In the connect function (not shown), he had set the hints to use AF_UNSPEC instead of AF_INET. So Java was seeing my localhost IP come through as 0:0:0:0:0:0:0:1 (in IPv6 format). When Java tried to connect back to send a callback, it received an exception (not sure why it cannot connect to an IPv6 address).
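For reference, the change was in the client-side connect code (not shown above) and amounted roughly to the following in its getaddrinfo hints setup, mirroring initWithPort: above:
memset(&hints, 0, sizeof hints);
hints.ai_family = AF_INET;       // was AF_UNSPEC; restricts getaddrinfo to IPv4 so the server
                                 // sees 127.0.0.1 instead of the IPv6 loopback ::1
hints.ai_socktype = SOCK_STREAM;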
After fixing this problem, I tried out my app again and this time the callback from the server was received by my client. However, subsequent callbacks fail to work. After receiving the first callback, the busy-loop keeps running (as it should). But when the server sends a second callback, it looks like the client cannot read it in. On the server side I can see that it sent the callback to the client successfully. It's just that the client is having trouble reading in the data. I added some print statements (see above) for debugging and this is what I get:
Client waiting for callbacks on port 2020
Connection accepted!
Inner loop
Received from Server |A callback from server to 127.0.0.1:2020|
Got a callback from server
Going to sleep now
Inner loop
Received from Server ||
Going to sleep now
Inner loop
Received from Server ||
Going to sleep now
Inner loop
... (and it keeps going regardless of the second callback being sent)
Here is how I am starting the thread (from the GUI):
CallbackInterceptorThread* callbackInterceptorThread = [[CallbackInterceptorThread alloc] initWithClientPort: clientPort appDelegate: self];
[callbackInterceptorThread start];
I think I've got it working. So from the Java side (the server), this was what I was doing:
Socket socket = new Socket(clientAddress, clientPort);
BufferedOutputStream out = new BufferedOutputStream(socket.getOutputStream());
out.write(("A callback from server to " + clientAddress + ":" + clientPort).getBytes());
out.flush();
out.close();
I put some debugging print-statements in my professor's code and noticed that in receiveBytes, recv was returning 0. The return value of recv is the length of the message that it received. So it received a zero-length string. But a return value of 0 also means that the peer closed the connection properly (which is exactly what I had done from the Java side with out.close()). So I figured that if I needed to respond to the second callback, I would need to accept the connection again. So I changed my busy loop to this:
printf("Client waiting for callbacks on port %s\n", [clientPort cString]);
while([socket accept]) {
printf("Connection accepted!\n");
returnString = [socket receiveBytes: buffer maxBytes: MAXDATASIZE beginAt: 0];
printf("Received from Server |%s|\n", [returnString cString]);
if([returnString length] > 0) {
printf("Got a callback from server\n");
[appDelegate populateGui];
}
}
[socket close];
and that seemed to do the trick. I am not sure if this is the right way to do it, so I am open to suggestions for improvement!

Can an objective-C NSThread access global variables?

Ok, basically I have a run loop going in my application every second or two, while at the same time I have another thread going that is looping through the listenForPackets method; broadcastMessage is only initiated when another action method takes place. The important part of this question is that when the listener thread is running separately from the main thread, it never prints out any of the print commands, and it seems to not have access to a global variable that I have defined called recvMessage, which lies outside of the interface and implementation sections.
In my code, every time the main run loop runs, it updates a UILabel in my GUI. When the app is running, my label stays blank the whole time and never changes. I've double-checked the GUI and everything is linked up correctly there, and my label is instantiated correctly too (I use the name "label" for the UILabel instance in the code below). Does anyone have any ideas why my label isn't updating? I believe the networking aspect of things is fine, because I just got that done and everything is "talking" ok. Maybe it's a variable scope issue that I don't know about, or are separate threads allowed to access global variables, such as the one I have used below (recvMessage)? I'm fairly new to multi-threaded applications, but I don't believe it's really that hard to implement using NSThread (only one line of code).
Global Variable
NSString *recvMessage;
Main Runloop - the section that updates the label every time it goes through the runloop
if (label.text != recvMessage)
    label.text = recvMessage;
Talker Method
-(void)broadcastMessage { // (NSString*)msg {
    msg = @"From_Master";
    NSLog(@"broadcastMessage - Stage 1");
    mc_ttl = 15; // number of node hops the message is allowed to travel across the network
    // define the port we will be using
    mc_port = MYPORT;
    // create a socket for sending to the multicast address
    if ((sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0) {
        NSLog(@"ERROR: broadcastMessage - socket() failed");
        return;
    }
    memset(&mc_addr, 0, sizeof(mc_addr));
    mc_addr.sin_family = AF_INET;
    mc_addr.sin_addr.s_addr = inet_addr(GROUP_ADDRESS);
    mc_addr.sin_port = htons(MYPORT);
    NSLog(@"broadcastMessage - Stage 2");
    // set the TTL (time to live/hop count) for the send
    if ((setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL, &mc_ttl, sizeof(mc_ttl))) < 0) {
        NSLog(@"ERROR: broadcastMessage - setsockopt() failed");
        return;
    }
    NSLog(@"broadcastMessage - Stage 3");
    // clear send buffer
    memset(send_str, 0, sizeof(send_str));
    // convert the message to a C string to send
    [msg getCString:send_str maxLength:MAX_LEN encoding:NSASCIIStringEncoding];
    //while (fgets(send_str, MAX_LEN, stdin)) {
    NSLog(@"broadcastMessage - Stage 4");
    NSLog(@"Message =");
    printf(send_str);
    // send string to multicast address
    if ((sendto(sock, send_str, sizeof(send_str), 0, (struct sockaddr *)&mc_addr, sizeof(mc_addr))) < 0) {
        NSLog(@"ERROR: broadcastMessage - sendto() sent incorrect number of bytes");
        //return;
    }
    NSLog(@"Sent Message -");
    printf(send_str);
    NSLog(@"broadcastMessage - Stage 5");
    // clear send buffer
    memset(send_str, 0, sizeof(send_str));
    NSLog(@"broadcastMessage - Stage 6 - Complete");
    close(sock);
}
Listener Method
-(void)listenForPackets {
    listeningFlag_on = 1; // allows reuse of the same socket
    NSLog(@"listenForPackets - Stage 1");
    if ((listeningSock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0) {
        NSLog(@"ERROR: listenForPackets - socket() failed");
        return;
        // make the method return an int instead of void and use this statement to check for errors
    }
    NSLog(@"listenForPackets - Stage 2");
    // set reuse port to on to allow multiple binds per host
    if ((setsockopt(listeningSock, SOL_SOCKET, SO_REUSEADDR, &listeningFlag_on, sizeof(listeningFlag_on))) < 0) {
        NSLog(@"ERROR: listenForPackets - setsockopt() Reuse failed");
        return;
        // make the method return an int instead of void and use this statement to check for errors
    }
    // construct a multicast address structure after erasing anything in the listeningmc_addr structure
    memset(&listeningmc_addr, 0, sizeof(listeningmc_addr));
    listeningmc_addr.sin_family = AF_INET;
    listeningmc_addr.sin_addr.s_addr = htonl(INADDR_ANY); // different from sender
    listeningmc_addr.sin_port = htons(MYPORT);
    // bind multicast address to socket
    if ((bind(listeningSock, (struct sockaddr *)&listeningmc_addr, sizeof(listeningmc_addr))) < 0) {
        NSLog(@"ERROR: listenForPackets - bind() failed");
        perror("Bind() -");
        return; // make the method return an int instead of void and use this statement to check for errors
    }
    //*********************************************************************************
    NSString *ipAddress = [[NSString alloc] initWithString:self.getIPAddress];
    const char *tmp = [ipAddress UTF8String];
    listeningMc_addr_str = tmp;
    printf("%s\n", listeningMc_addr_str);
    listeningMc_req.imr_multiaddr.s_addr = inet_addr(GROUP_ADDRESS);
    listeningMc_req.imr_interface.s_addr = htonl(INADDR_ANY);
    // send an ADD MEMBERSHIP message via setsockopt
    if ((setsockopt(listeningSock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &listeningMc_req, sizeof(listeningMc_req))) < 0) {
        NSLog(@"ERROR: listenForPackets - setsockopt() failed");
        int err = errno;
        NSLog(@"errno - %i", err);
        NSLog(@"Error = %s", strerror(err));
        perror("ERROR");
        return; // make the method return an int instead of void and use this statement to check for errors
    }
    NSLog(@"listenForPackets - Stage 3");
    for (;;) { // loop forever
        // clear the receive buffers & structs
        memset(listeningRecv_str, 0, sizeof(listeningRecv_str));
        listeningFrom_len = sizeof(listeningFrom_addr);
        memset(&listeningFrom_addr, 0, listeningFrom_len);
        NSLog(@"Test #1 Complete");
        //msgStatus.text = @"Waiting...";
        // block waiting to receive a packet
        listeningFrom_len = sizeof(listeningmc_addr);
        if ((listeningRecv_len = recvfrom(listeningSock, listeningRecv_str, MAX_LEN, 0, (struct sockaddr*)&listeningmc_addr, &listeningFrom_len)) < 0) {
            NSLog(@"ERROR: listenForPackets - recvfrom() failed");
            return; // make the method return an int instead of void and use this statement to check for errors
        }
        NSLog(@"Test #2 Complete - Received a Message =");
        NSLog(@"listenForPackets - Stage 4");
        // listeningRecv_str
        tmpString = [[NSString alloc] initWithCString:listeningRecv_str encoding:NSASCIIStringEncoding];
        NSLog(@"Message Received =");
        NSLog(tmpString);
        recvMessage = tmpString;
        //}
        // received string
        printf("Received %d bytes from %s: ", listeningRecv_len, inet_ntoa(listeningFrom_addr.sin_addr));
        printf("%s", listeningRecv_str);
        //}
    }
    // send a DROP MEMBERSHIP message via setsockopt
    if ((setsockopt(listeningSock, IPPROTO_IP, IP_DROP_MEMBERSHIP, (void*) &listeningMc_req, sizeof(listeningMc_req))) < 0) {
        NSLog(@"ERROR: listenForPackets - setsockopt() drop membership failed");
        //return 1; // make the method return an int instead of void and use this statement to check for errors
    }
    close(listeningSock);
    NSLog(@"listenForPackets - Stage 5 - Complete");
}
Yes, all threads can access global variables. There are certainly problems with how you are using the global--you leak the NSString you create every time the variable is updated, and you are accessing the same memory from two threads without any access control--but there's nothing that would prevent the variable from being updated.
If none of your log messages are being printed, the problem is that the code is never being run, which is why the variable isn't changing. You should take a look at the code that is supposed to kick off this new thread.
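For reference, a minimal way to kick off the listener on its own thread (assuming listenForPackets lives on the same object, here self) would be:
[NSThread detachNewThreadSelector:@selector(listenForPackets)
                         toTarget:self
                       withObject:nil];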
Also note that to update any UI components, such as setting label text or any other GUI element's value, you need to use performSelectorOnMainThread:. UI values will not update reliably from background threads.
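For example, a rough sketch of handing the received string to the main thread instead of assigning the global directly (updateLabelWithMessage: is a hypothetical helper, not part of the original code):
// In listenForPackets, after building tmpString:
[self performSelectorOnMainThread:@selector(updateLabelWithMessage:)
                       withObject:tmpString
                    waitUntilDone:NO];

// Hypothetical helper that runs on the main thread:
- (void)updateLabelWithMessage:(NSString *)message {
    label.text = message;
}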