I am creating a project in Swift and I want to display the device model name. I am following the link below to get the model name:
http://myiosdevelopment.blogspot.co.uk/2012/11/getting-device-model-number-whether-its.html
The code in the link is written in Objective-C, but I am not sure how to import this into Swift:
#import <sys/utsname.h>
Can someone please help?
sys/utsname.h is imported into Swift by default, so you don't really need to import it from the bridging header. But using utsname from Swift is really painful, as Swift imports fixed-length C arrays as tuples. If you look into utsname.h, you will see that the struct members of utsname are all char arrays of length 256:
#define _SYS_NAMELEN 256
struct utsname {
char sysname[_SYS_NAMELEN]; /* [XSI] Name of OS */
char nodename[_SYS_NAMELEN]; /* [XSI] Name of this network node */
char release[_SYS_NAMELEN]; /* [XSI] Release level */
char version[_SYS_NAMELEN]; /* [XSI] Version level */
char machine[_SYS_NAMELEN]; /* [XSI] Hardware type */
};
Which gets imported into Swift like this:
var _SYS_NAMELEN: Int32 { get }
struct utsname {
var sysname: (Int8, Int8, /* ... 254 more times "Int8, " here ... */) /* [XSI] Name of OS */
var nodename: (Int8, Int8, /* ... snip ... */ ) /* [XSI] Name of this network node */
var release: (Int8, Int8, /* ... snip ... */ ) /* [XSI] Release level */
var version: (Int8, Int8, /* ... snip ... */ ) /* [XSI] Version level */
var machine: (Int8, Int8, /* ... snip ... */ ) /* [XSI] Hardware type */
}
Yes, they're tuples with 256 Int8s each, which causes hilarious autocompletion in Xcode.
Currently, there is no way to initialize a tuple in Swift without writing out all the values, so initializing one as a local variable would be rather verbose. There is also no way to convert the tuple to an array, so that huge tuple is not very useful.
The easiest solution would be to implement it in Objective-C.
If you're dead set on using Swift, you can do this, but it's not pretty:
// Declare an array that can hold the bytes required to store `utsname`, initialized
// with zeros. We do this to get a chunk of memory that is freed upon return of
// the method
var sysInfo: [CChar] = Array(count: sizeof(utsname), repeatedValue: 0)
// We need to get to the underlying memory of the array:
let machine = sysInfo.withUnsafeMutableBufferPointer { (inout ptr: UnsafeMutableBufferPointer<CChar>) -> String in
// Call uname and let it write into the memory Swift allocated for the array
uname(UnsafeMutablePointer<utsname>(ptr.baseAddress))
// Now here is the ugly part: `machine` is the 5th member of `utsname` and
// each member is `_SYS_NAMELEN` sized. We skip the first 4 members
// of the struct, which lands us at the memory address of the `machine`
// member
let machinePtr = advance(ptr.baseAddress, Int(_SYS_NAMELEN * 4))
// Create a Swift string from the C string
return String.fromCString(machinePtr)!
}
In Swift 4 and later you can just use the UIDevice model property (note that it returns a generic name such as "iPhone" or "iPad", not the specific machine identifier):
func getPhoneModel() -> String {
return UIDevice.current.model
}
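If you need the specific machine identifier (e.g. "iPhone10,3"), you still have to go through utsname. A minimal sketch for modern Swift that walks the fixed-size tuple with Mirror; the function name modelIdentifier is my own:
import Foundation

func modelIdentifier() -> String {
    var systemInfo = utsname()
    uname(&systemInfo)
    // `machine` is imported as a tuple of 256 Int8s; Mirror lets us
    // iterate it without spelling out all 256 members.
    let mirror = Mirror(reflecting: systemInfo.machine)
    return mirror.children.reduce("") { identifier, element in
        guard let value = element.value as? Int8, value != 0 else {
            return identifier
        }
        return identifier + String(UnicodeScalar(UInt8(value)))
    }
}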
My 2 cents for Swift 5, if you want to call utsname:
func platform() -> String {
var systemInfo = utsname()
uname(&systemInfo)
let size = Int(_SYS_NAMELEN) // 256 on Darwin
let s = withUnsafeMutablePointer(to: &systemInfo.machine) {p in
p.withMemoryRebound(to: CChar.self, capacity: size, {p2 in
return String(cString: p2)
})
}
return s
}
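Usage, with illustrative return values:
print(platform()) // e.g. "iPhone10,3" on a device, "x86_64" in the simulator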
The code shown in that blog post looks like C and not Objective-C; however, I think you can write a wrapper around it in Objective-C.
In order to enable bridging between Objective-C and Swift, just add a new Objective-C file to your project; Xcode will prompt you whether to create a bridging header.
Just answer yes, and Xcode will automatically create a <appname>-Bridging-Header.h file. Open it and #import any Objective-C header file that you want to use from Swift.
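For this question, the bridging header would contain little more than the include in question (the file name shown is illustrative):
// MyApp-Bridging-Header.h -- the name depends on your project
#import <sys/utsname.h>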
In Swift 2.0:
var sysInfo: [CChar] = Array(count: sizeof(utsname), repeatedValue: 0)
let deviceModel = sysInfo.withUnsafeMutableBufferPointer { (inout ptr: UnsafeMutableBufferPointer<CChar>) -> String in
uname(UnsafeMutablePointer<utsname>(ptr.baseAddress))
let machinePtr = ptr.baseAddress.advancedBy(Int(_SYS_NAMELEN * 4))
return String.fromCString(machinePtr)!
}
print(deviceModel)
OK. Looked at the various answers when I typed out the question, and don't see the answer listed.
This seems such a basic, fundamental issue that I MUST be doing something wrong, but I am being driven crazy (actually, not much of a "drive." More of a short putt) trying to figure out what I am getting wrong.
I have created a test project that can be downloaded here (small project).
In any case, I encountered this, when working on importing a large Objective-C SDK that I wrote into Swift.
Everything works great. Until... I try to do comparisons on enums declared in C.
Obviously, C enums are not turned into Swift enums, but I am having trouble figuring out how to use them.
Here's a sample of what you will see in the test project:
I have a couple of C files that declare enums:
#import <Foundation/Foundation.h>
typedef enum
{
StandardEnumType_Undef = 0xFF,
StandardEnumType_Value0 = 0x00,
StandardEnumType_Value1 = 0x01,
StandardEnumType_Value2 = 0x02
} StandardEnumType;
typedef NS_ENUM ( unsigned char, AppleEnumType )
{
AppleEnumType_Undef = 0xFF,
AppleEnumType_Value0 = 0x00,
AppleEnumType_Value1 = 0x01,
AppleEnumType_Value2 = 0x02
};
StandardEnumType translateIntToEnumValue ( int inIntValue );
int translateEnumValueToInt ( StandardEnumType inValue );
AppleEnumType translateIntToAppleEnumValue ( int inIntValue );
int translateAppleEnumValueToInt ( AppleEnumType inValue );
The functions mentioned do pretty much what it says on the tin. I won't include them.
I did the bridging header and all that.
I am trying to use them in the initial load of the Swift app:
func application(application: UIApplication!, didFinishLaunchingWithOptions launchOptions: NSDictionary!) -> Bool
{
let testStandardConst:StandardEnumType = translateIntToEnumValue ( 0 )
// Nope.
// if ( testStandardConst.toRaw() == 0 )
// {
//
// }
// This don't work.
// if ( testStandardConst == StandardEnumType_Value0 )
// {
//
// }
// Neither does this.
// if ( testStandardConst == StandardEnumType_Value0.toRaw() )
// {
//
// }
// Not this one, either.
// if ( testStandardConst.toRaw() == StandardEnumType_Value0 )
// {
//
// }
let testAppleConst:AppleEnumType = translateIntToAppleEnumValue ( 0 )
// This don't work.
// if ( testAppleConst == AppleEnumType.AppleEnumType_Value0 )
// {
//
// }
// Neither does this.
// if ( testAppleConst == AppleEnumType.AppleEnumType_Value0.toRaw() )
// {
//
// }
// Nor this.
// if ( testAppleConst == .AppleEnumType_Value0 )
// {
//
// }
return true
}
I can't seem to get the $##!! enums to compare to either ints or anything else.
I am often humbled by the stupid errors I make, and sometimes need them pointed out to me, so please humble me.
Thanks!
When working with typedef NS_ENUM enums in Swift, you should omit the enum name in comparisons and assignments. Here is how you can compare it:
if ( testAppleConst == .Value0 ) { ... }
You can read more about this in Apple's documentation for Swift.
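For completeness, a sketch of how both comparisons can look in current Swift (toRaw() was renamed rawValue in Swift 1.2, and Swift 3 and later lowercase imported case names, so exact spellings depend on your Swift version):
let testStandardConst = translateIntToEnumValue(0)
// Plain C enums import as RawRepresentable types; recent Swift versions
// make them Equatable, so you compare against the imported constant:
if testStandardConst == StandardEnumType_Value0 {
    // ...
}

let testAppleConst = translateIntToAppleEnumValue(0)
// NS_ENUM types import as native Swift enums with the common prefix
// stripped, so the case is .value0 (.Value0 before Swift 3):
if testAppleConst == .value0 {
    // ...
}
// rawValue replaces the old toRaw():
if testAppleConst.rawValue == 0 {
    // ...
}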
I am new to SpiderMonkey and want to use it to transform a JavaScript file into a sequence of bytecode.
I got SpiderMonkey and built it in debug mode.
I want to use the JS_CompileScript function in jsapi.h to compile JavaScript code and analyze it to get the bytecode, but when I compile the code below and run it, I get a runtime error.
The error is "Unhandled exception at 0x0f55c020 (mozjs185-1.0.dll) in spiderMonkeyTest.exe: 0xC0000005: Access violation reading location 0x00000d4c." and I cannot resolve it.
Can anybody help me resolve this, or suggest other ways to get bytecode from JavaScript code using SpiderMonkey?
// spiderMonkeyTest.cpp : Defines the entry point for the console application.
//
#define XP_WIN
#include <iostream>
#include <fstream>
#include "stdafx.h"
#include "jsapi.h"
#include "jsanalyze.h"
using namespace std;
using namespace js;
static JSClass global_class = { "global",
JSCLASS_NEW_RESOLVE | JSCLASS_GLOBAL_FLAGS,
JS_PropertyStub,
NULL,
JS_PropertyStub,
JS_StrictPropertyStub,
JS_EnumerateStub,
JS_ResolveStub,
JS_ConvertStub,
NULL,
JSCLASS_NO_OPTIONAL_MEMBERS
};
int _tmain(int argc, _TCHAR* argv[]) {
/* Create a JS runtime. */
JSRuntime *rt = JS_NewRuntime(16L * 1024L * 1024L);
if (rt == NULL)
return 1;
/* Create a context. */
JSContext *cx = JS_NewContext(rt, 8192);
if (cx == NULL)
return 1;
JS_SetOptions(cx, JSOPTION_VAROBJFIX);
JSScript *script;
JSObject *obj;
const char *js = "function a() { var tmp; tmp = 1 + 2; temp = temp * 2; alert(tmp); return 1; }";
obj = JS_CompileScript(cx,JS_GetGlobalObject(cx),js,strlen(js),"code.js",NULL);
script = obj->getScript();
if (script == NULL)
return JS_FALSE; /* compilation error */
js::analyze::Script *sc = new js::analyze::Script();
sc->analyze(cx,script);
JS_DestroyContext(cx);
JS_DestroyRuntime(rt);
/* Shut down the JS engine. */
JS_ShutDown();
return 1;
}
Which version of SpiderMonkey are you using? I am using the one that comes with Firefox 10, so the API may be different.
You should create a new global object and initialize it by calling JS_NewCompartmentAndGlobalObject() and JS_InitStandardClasses() before compiling your script:
.....
/*
* Create the global object in a new compartment.
* You always need a global object per context.
*/
global = JS_NewCompartmentAndGlobalObject(cx, &global_class, NULL);
if (global == NULL)
return 1;
/*
* Populate the global object with the standard JavaScript
* function and object classes, such as Object, Array, Date.
*/
if (!JS_InitStandardClasses(cx, global))
return 1;
......
Note, the function JS_NewCompartmentAndGlobalObject() is obsolete now; check the latest JSAPI documentation for the version you are using. Your JS_CompileScript() call tries to retrieve a global object which has not been created, and this probably causes the exception.
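Putting it together, a sketch of the corrected sequence against the 1.8.5-era API that the mozjs185 DLL above suggests (names differ in newer releases); the key change is passing the freshly created global to JS_CompileScript() instead of JS_GetGlobalObject(cx):
/* Sketch: create and initialize a global first, then compile against it. */
JSObject *global = JS_NewCompartmentAndGlobalObject(cx, &global_class, NULL);
if (global == NULL || !JS_InitStandardClasses(cx, global))
    return 1;

JSObject *scriptObj = JS_CompileScript(cx, global, js, strlen(js),
                                       "code.js", 1);
if (scriptObj == NULL)
    return 1; /* compilation error */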
How about using the function "SaveCompiled"? It will save the object/op-code (the compiled JavaScript) to a file.
I would like to use this function to help monitor memory:
void print_free_memory ()
{
mach_port_t host_port;
mach_msg_type_number_t host_size;
vm_size_t pagesize;
host_port = mach_host_self();
host_size = sizeof(vm_statistics_data_t) / sizeof(integer_t);
host_page_size(host_port, &pagesize);
vm_statistics_data_t vm_stat;
if (host_statistics(host_port, HOST_VM_INFO, (host_info_t)&vm_stat, &host_size) != KERN_SUCCESS)
NSLog(#"Failed to fetch vm statistics");
/* Stats in bytes */
natural_t mem_used = (vm_stat.active_count +
vm_stat.inactive_count +
vm_stat.wire_count) * pagesize;
natural_t mem_free = vm_stat.free_count * pagesize;
natural_t mem_total = mem_used + mem_free;
NSLog(#"used: %u free: %u total: %u", mem_used, mem_free, mem_total);
}
A. Where do I put this function in my Xcode project?
B. How do I call it? Obviously I'd like to set up to continuously monitor memory.
A. Where do I put this function in my Xcode project?
Put the definition in a separate implementation file and a declaration in a separate header file. (Since the implementation calls NSLog, make it a .m (Objective-C) file rather than a plain .c file, or replace NSLog with printf.)
PrintFreeMem.h
extern void print_free_memory();
PrintFreeMem.m
#import <Foundation/Foundation.h> // for NSLog
#import <mach/mach.h>             // mach_host_self, host_page_size, host_statistics
#include "PrintFreeMem.h"
void print_free_memory() {
    // Your implementation from the question goes here
}
B. How do I call it?
You can call it the way you call regular C functions, after including its header file:
#include "PrintFreeMem.h"
-(void)myMethod {
...
print_free_memory();
}
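As for continuously monitoring (question B), one option is a repeating NSTimer that calls the function periodically; a sketch, where the one-second interval and the method names are my own choices:
// Sketch: in some Objective-C object, e.g. your app delegate.
- (void)startMemoryMonitoring {
    // Fire once per second; the interval is arbitrary.
    [NSTimer scheduledTimerWithTimeInterval:1.0
                                     target:self
                                   selector:@selector(logMemory:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)logMemory:(NSTimer *)timer {
    print_free_memory();
}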
You can declare it in the header file and write the function in the implementation file, or you can simply put the function in the implementation file, but then it can be called only from the lines below its definition:
print_free_memory ();
Hope this works
Using the µVision IDE for STM32 development, I want to have some timer variables that are not initialized at startup. I have tried:
volatile unsigned int system_time __attribute__((section(".noinit")));
and
__attribute__((zero_init)) volatile int system_timer;
but nothing seems to work. Following hints from elsewhere, I have additionally checked NoInit at Options/Target/IRAM1.
Still, the variables are set to zero after reset.
Can anybody help?
You need to follow these steps.
Declare your variable as follows:
volatile unsigned int system_time __attribute__((section(".noinit"),zero_init));
Then you have to use a scatter file that declares an execution region with the UNINIT attribute, and use it with the linker.
Example scatter file:
LR_IROM1 0x08000000 0x00080000 { ; load region size_region
ER_IROM1 0x08000000 0x00080000 { ; load address = execution address
*.o (RESET, +First)
*(InRoot$$Sections)
.ANY (+RO)
}
RW_IRAM1 0x20000000 UNINIT 0x00000100 { ;no init section
*(.noinit)
}
RW_IRAM2 0x20000100 0x0000FFF0 { ;all other rw data
.ANY(+RW +ZI)
}
}
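One related trick worth adding (my own suggestion, not part of the answer above): since .noinit RAM contains garbage on the very first power-up, it is common to guard the preserved variables with a magic value so you can tell a cold boot from a warm reset:
/* Sketch: detect whether the noinit data is valid after reset. */
#define NOINIT_MAGIC 0xDEADBEEFu

volatile unsigned int system_time  __attribute__((section(".noinit"), zero_init));
volatile unsigned int noinit_magic __attribute__((section(".noinit"), zero_init));

void system_time_init(void)
{
    if (noinit_magic != NOINIT_MAGIC) { /* cold boot: contents are garbage */
        system_time = 0;
        noinit_magic = NOINIT_MAGIC;
    }
    /* warm reset: system_time keeps its pre-reset value */
}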
You have to check the address of that variable in the .MAP file and use the _at_ keyword, which allows you to specify the address for uninitialized variables in your C source files. The following example demonstrates how to locate several different variable types using the _at_ keyword:
struct link {
struct link idata *next;
char code *test;
};
struct link idata list _at_ 0x40; /* list at idata 0x40 */
char xdata text[256] _at_ 0xE000; /* array at xdata 0xE000 */
int xdata i1 _at_ 0x8000; /* int at xdata 0x8000 */
char far ftext[256] _at_ 0x02E000; /* array at far 0x02E000 */
void main ( void ) {
list.next = (void *) 0;
i1 = 0x1234;
text [0] = 'a';
ftext[0] = 'f';
}
I hope this helps solve your problem.
I've got a series of OpenCV-generated YAML files and would like to parse them with yaml-cpp.
I'm doing okay on simple stuff, but the matrix representation is proving difficult.
# Center of table
tableCenter: !!opencv-matrix
rows: 1
cols: 2
dt: f
data: [ 240, 240]
This should map into the vector
240
240
with type float. My code looks like:
#include "yaml.h"
#include <fstream>
#include <string>
struct Matrix {
int x;
};
void operator >> (const YAML::Node& node, Matrix& matrix) {
unsigned rows;
node["rows"] >> rows;
}
int main()
{
std::ifstream fin("monsters.yaml");
YAML::Parser parser(fin);
YAML::Node doc;
Matrix m;
doc["tableCenter"] >> m;
return 0;
}
But I get
terminate called after throwing an instance of 'YAML::BadDereference'
what(): yaml-cpp: error at line 0, column 0: bad dereference
Abort trap
I searched around for some documentation for yaml-cpp, but there doesn't seem to be any, aside from a short introductory example on parsing and emitting. Unfortunately, neither of these two help in this particular circumstance.
As I understand, the !! indicate that this is a user-defined type, but I don't see with yaml-cpp how to parse that.
You have to tell yaml-cpp how to parse this type. Since C++ isn't dynamically typed, it can't detect what data type you want and create it from scratch - you have to tell it directly. Tagging a node is really only for yourself, not for the parser (it'll just faithfully store it for you).
I'm not really sure how an OpenCV matrix is stored, but if it's something like this:
class Matrix {
public:
Matrix(unsigned r, unsigned c, const std::vector<float>& d): rows(r), cols(c), data(d) { /* init */ }
Matrix(const Matrix&) { /* copy */ }
~Matrix() { /* delete */ }
Matrix& operator = (const Matrix&) { /* assign */ }
private:
unsigned rows, cols;
std::vector<float> data;
};
then you can write something like
void operator >> (const YAML::Node& node, Matrix& matrix) {
unsigned rows, cols;
std::vector<float> data;
node["rows"] >> rows;
node["cols"] >> cols;
node["data"] >> data;
matrix = Matrix(rows, cols, data);
}
Edit: It appears that you're OK up until here, but you're missing the step where the parser loads the information into the YAML::Node. Instead, use it like:
std::ifstream fin("monsters.yaml");
YAML::Parser parser(fin);
YAML::Node doc;
parser.GetNextDocument(doc); // <-- this line was missing!
Matrix m;
doc["tableCenter"] >> m;
Note: I'm guessing dt: f means "data type is float". If that's the case, it'll really depend on how the Matrix class handles this. If you have a different class for each data type (or a templated class), you'll have to read that field first, and then choose which type to instantiate. (If you know it'll always be float, that'll make your life easier, of course.)
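If you do want to honor the dt field, a sketch (assuming OpenCV's single-letter type codes, e.g. "f" for float and "d" for double) is to read it first and dispatch before filling the matrix:
void operator >> (const YAML::Node& node, Matrix& matrix) {
    std::string dt;
    node["dt"] >> dt;           // OpenCV type code, e.g. "f" or "d"
    unsigned rows, cols;
    node["rows"] >> rows;
    node["cols"] >> cols;
    if (dt == "f") {
        std::vector<float> data;
        node["data"] >> data;
        matrix = Matrix(rows, cols, data);
    } else {
        // Construct a double-backed (or templated) matrix here instead,
        // or report an unsupported type code.
    }
}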