Can BoringSSL work on an ARMv8 bare-metal platform? I tried to build BoringSSL with aarch64-elf-gcc, but it refused to build.
If it can, is there any porting guide or are there any suggestions?
Probably not out of the box. But you should not even try to use it, mainly because, according to Google itself, it is not intended for general use.
It is never good to be on your own when using a library, especially a cryptographic one: it usually means no bug fixes, no support, and no user forums, among other things.
You should instead consider a library that was designed for this purpose, such as mbedtls (formerly known as PolarSSL).
It is used on a wide range of systems, from bare-metal systems (FreeRTOS) to Linux (the Hiawatha web server uses it, for example).
Update: Even if support for the Armv8-a hardware crypto extensions is needed, you could still reuse the BoringSSL Armv8-a optimized routines (ISC license) or the Cavium armv8_crypto library (BSD license) to replace the equivalent mbedtls (Apache 2.0 license) routines: cryptographic functions usually have clean, small interfaces.
In my experience, this may still be faster than porting a library targeting a general-purpose operating system if your target is a bare-metal one, but you ultimately have to evaluate the costs of both options in your specific case.
My guess is that there is far less work involved in adding support for the Armv8-a crypto extensions to mbedtls, using already existing, supported code under the proper license, than in attempting to strip down OpenSSL or BoringSSL for use on a bare-metal target.
There is a very good piece of documentation explaining how to add support for hardware-accelerated crypto to mbedtls here; it may help you evaluate your options.
All,
I'm attempting to estimate the effort to port an app developed on Windows (.NET) to Linux (Mono). I came across the MoMA tool, which attempts to look through my .exe and find potential areas of incompatibility. Most of my issues appear to be centered around getting/setting network settings, getting network info, etc. (e.g. ManagementBaseObject.get_Item and set_Item).
In almost all of the cases, the Mono functionality is listed as "ToDo". For estimation purposes, is it safe to assume most/all of these have some kind of workaround? I would imagine this type of basic networking support must be included in the latest version of Mono. Or should I assume none of this is currently available and I would be stuck waiting for it to be implemented (or be forced to implement it myself)?
Thanks,
Dan
First, see Mono Compatible Networking/Socket Library. Also, take a look at Cross-Platform Network Applications with Mono. You can start with the C# Network Library.
I am developing a Remote Software Provisioning system that should be able to handle all deployment, installation, un-installation and upgrades of software components. Software can be in any language (Java, .NET, C/C++, etc.) and the target side can be PCs, embedded systems and smartphones.
I have found Apache ACE to be a good candidate for developing this system.
I want to know if there is any advantage/necessity of using OSGi on the target side, as Apache ACE can do software provisioning to non-OSGi targets as well.
Having a modular framework like OSGi at the client side is a huge advantage when doing remote management, because it gives you a lot of insight into what's happening inside - installed bundles, dependencies, bundle states, available services, etc. This helps a lot when you have to solve a problem remotely. Another advantage is that OSGi basically forces programmers to develop proper modular and dynamic systems, which makes (remote) updating much easier.
So, if you have to decide now what language and framework to use for the client side, I strongly recommend OSGi for the embedded and mobile clients. For the PCs (I guess you mean desktop PCs?) this is probably not the best choice - it depends a lot on what you want to achieve there. If you want to install MS Office remotely, OSGi won't get you very far ;)
However, if you already have existing programs at the client side and are discussing whether to convert them to OSGi, I would recommend investing some time first to see whether they can be converted easily. Some software packages could give you a lot of trouble converting to OSGi, not because OSGi is complex, but because the program itself is not modular and has a lot of assumptions about the static nature of the environment (e.g. nothing ever disappears, parts of the system never get updated, etc.). The irony of the matter is that these are exactly the programs which will give you the most trouble later anyway, no matter which remote provisioning system you choose.
If you have OSGi on some of the targets, be sure to use a remote provisioning system which gives you access to the full OSGi functionality and not only the most basic and simple install and update functions. I haven't yet used Apache ACE, but I have experience with another provisioning system - mPower Remote Manager. Here are some snapshots from the documentation which can give you a feeling for what is possible with OSGi as a base - you can draw your own conclusions as to whether it will be useful for your case or not.
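To make the "insight" point concrete, here is a minimal sketch using nothing but the plain OSGi framework API (no ACE- or mPower-specific code; the class name is made up) of a bundle activator that lists every installed bundle and its lifecycle state - exactly the kind of information a remote management agent can report back:

    // Minimal sketch: enumerate installed bundles and their states via the OSGi API.
    import org.osgi.framework.Bundle;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class InventoryActivator implements BundleActivator {

        public void start(BundleContext context) {
            // Every installed bundle is visible, together with its current state -
            // exactly what a remote provisioning server wants to know about.
            for (Bundle bundle : context.getBundles()) {
                System.out.println(bundle.getSymbolicName()
                        + " state=" + stateName(bundle.getState()));
            }
        }

        public void stop(BundleContext context) {
            // Nothing to clean up in this sketch.
        }

        private static String stateName(int state) {
            switch (state) {
                case Bundle.ACTIVE:    return "ACTIVE";
                case Bundle.INSTALLED: return "INSTALLED";
                case Bundle.RESOLVED:  return "RESOLVED";
                default:               return "OTHER";
            }
        }
    }

A real management agent would of course push this inventory to the server instead of printing it, but the point is that the framework hands you this information for free.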
I've given some examples in the other question you asked:
What are the non-osgi targets with which Apache ACE can work
You can write your own management agent that talks to the ACE server and installs artifacts. There actually are a couple of places where you could hook in your own code and protocol. Is there a concrete language/environment you're thinking of using, or are you just exploring the possibilities right now?
Well, the advantages of OSGi haven't changed, so for that I can refer you to the standard page.
To be a bit more constructive, I'll read the question as 'Should I bother converting my application to OSGi, as it is not necessary for ACE?'
I think that depends on what 'kind' of updating mechanism you're after. If you have a monolithic application (at least from the provisioning perspective) which you deploy and update only as a whole (like an iOS app), then there isn't much to gain for provisioning purposes by using OSGi.
For the rest I can tell you the same as I tell anybody else: converting an application to OSGi isn't hard, but modularizing code can be a nightmare - and that is something you'll need to face at some point, OSGi or not. If your code is already modularized, using OSGi should be a piece of cake.
I saw that the Clang 3.0 port includes Objective-C as a development language, and I also found the ports "libobjc2-1.6" (Replacement Objective-C runtime supporting Obj-C 2 features) and "ofc-0.8.1_5" (The Objective-C Foundation Classes library).
Let's say we are considering using Objective-C on FreeBSD to develop a web-based application (versus using Java and running it on Tomcat/Glassfish); how do we approach it?
Does Objective-C development actually work on FreeBSD (9.0)?
What are the things (frameworks/library) to download and install?
What IDE?
As mentioned, let's say we intend to develop a web application: what are the libraries? (We also saw that there is "GNUstepWeb", a successor to WebObjects - is this the web library we should consider? Is it the ONLY one, or are there alternatives? Further, can GNUstep/GNUstepWeb compile under Clang 3.0 or make use of the Objective-C ports ("libobjc2-1.6" and "ofc-0.8.1_5") mentioned above? Are those ports relevant?)
Has anyone successfully done a web application project development on FreeBSD using Objective-C (and deployed on FreeBSD)?
Note: by web-based application I mean one that takes in HTTP (RESTful) calls and talks to a database (traditional and/or NoSQL).
There is http://cocotron.org, a port (more like a rewrite) of Apple's Objective-C runtime.
I would still advise against using Objective-C for the web stack. I have done it before, and I must say it involves a big chunk of fairly common code that you will need to implement yourself just for basic HTTP server functionality.
Also, Cocotron is not really that fast (as a runtime). It's fine for desktop applications, but the web world is much more demanding.
I am writing a library called CGIKit (https://github.com/xcvista/CGIKit) that supports this, using FastCGI to interface with the server; it works on GNUstep instead of Cocotron.
You may look at SOPE and SOGo: http://sope.opengroupware.org/en/build/thirdparty.html
Someone seems to have had success building Objective-C programs for FreeBSD 9.x.
You don't need to worry about the IDE if you don't mind using Apple: it is possible to write on a Mac and run on FreeBSD (personally I think this is the best of both worlds). IMO, if there's ever a server OS with Objective-C ready, FreeBSD will be the first one.
The more serious problem is libraries and frameworks. We don't have many options in Objective-C for web server development, even on OS X. But we can wrap existing C/C++ libraries (just like many great Node.js, Python and Ruby libraries do), and I think we can get a bunch of options with little effort.
Some people worry about security. And I always wonder how many foundational programs on the network are written in C/C++ and other languages.
In his blog post "Using Objective-C on the server", Graham Lee describes how to set up a minimal GNUstepWeb application. Obviously, the build instructions for GNUstep-make would differ, but other than that this seems like a nice starting point.
He wrote several other posts (jQuery, AJAX) further exploring GSW.
I want to start with smartcard programming soon. Please help me with the things required to start learning with javacard. Which IDE (if any IDE supports it), and what software and hardware do I need? Just as there are mobile phone simulators, is there any smartcard simulator? Or, if I have to buy a smartcard, please specify which cards, and where and how I can get them.
A general answer regarding smartcard programming is that you should be ready to navigate a confusing list of tools and technologies. Typically smartcard developers begin with a specific hardware platform in mind: more specific than simply javacard.
Since you've specifically mentioned javacard, we can focus on a few starting points.
Javacard SDK
You might begin with the javacard dev kit. I haven't used the most recent - I'm still using 2.0.2. This dev kit is very command-line oriented, so expect to be doing most of your work outside an IDE. However, the documentation is pretty helpful and should get you up to speed pretty quickly. At any rate, it's a good place to start, since it's official.
EclipseJCDE also looks interesting, but I haven't used it. I seem to recall another project aiming to build javacard Eclipse tools, but I may just be thinking of EclipseJCDE.
IBM Tools
At one time IBM published and maintained a set of JCOP tools that integrated with the Eclipse IDE. The great thing about this is that they would send you a package containing some dev tools and a couple of JCOP cards. The annoying thing is that an activation code was required. Have a look here. The download link is still good; good luck with the email address listed there. Also note that these tools require an older build of Eclipse. The build/debug support is very good, including a built-in javacard simulator.
Global Platform
If you plan on doing javacard programming, you should also get to know Global Platform. It's a smartcard standard, and in the context of javacard, you'll need to know about the GP spec when you need to load and manage javacard applets. This is required for working with JCOP cards. For the latest GP spec search for GlobalPlatform Card Specifications. You'll need to be very familiar with basic smartcard concepts, e.g. APDUs.
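To make the APDU model a bit more concrete, here is a minimal, hypothetical javacard applet sketch (the instruction byte and response bytes are made up, and the class still has to be converted and loaded onto a card or simulator with the tooling above) showing how an applet dispatches on incoming APDUs:

    // Minimal javacard applet sketch: dispatch on the APDU instruction byte
    // and send back a short response. Names and values are illustrative only.
    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.ISO7816;
    import javacard.framework.ISOException;
    import javacard.framework.Util;

    public class HelloApplet extends Applet {

        private static final byte INS_HELLO = (byte) 0x40;   // assumed instruction byte
        private static final byte[] GREETING = { (byte) 0x48, (byte) 0x69 }; // "Hi" in ASCII

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new HelloApplet().register();
        }

        public void process(APDU apdu) {
            if (selectingApplet()) {
                return; // SELECT is acknowledged by the framework
            }
            byte[] buf = apdu.getBuffer();
            switch (buf[ISO7816.OFFSET_INS]) {
                case INS_HELLO:
                    // Copy the response into the APDU buffer and send it back.
                    Util.arrayCopyNonAtomic(GREETING, (short) 0, buf, (short) 0,
                            (short) GREETING.length);
                    apdu.setOutgoingAndSend((short) 0, (short) GREETING.length);
                    break;
                default:
                    ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
            }
        }
    }

Everything you send to and receive from this applet travels as APDU command/response pairs, which is why the ISO 7816-4 and GP specs keep coming up.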
Hardware
Hardware choices are too varied for me to make useful recommendations, beyond the JCOP stuff above. As I mentioned, if you can use the IBM kit then you'll get a good JCOP/javacard simulator with the Eclipse tools. I'm sure there are other card simulators available.
etc.
Beyond that there is a long list of other technical specifications employed by smartcard programmers, and unfortunately many of them aren't freely available (ISO docs). If you'll be doing GSM programming, I think you can get to all of the GSM specs, search for ETSI GSM specifications. GSM 11.11 is particularly useful for learning more about APDU command/response, without access to ISO specs, e.g. ISO 7816-4.
Let me share two new free tools that I am using to learn javacard. I hope they help others get started with javacard easily.
JCIDE: It is an Integrated Development Environment designed specifically for the Java Card programming language.
PyAPDUTool: It is a handy tool which can communicate with the card via the reader connected to PC. It is a PC/SC compliant application.
I'm writing software for a new hardware device which I want any kind of new third-party application to be able to access if they want to.
The software will be a native process (C++) that should be pollable by 3rd party games and applications that want to support the hardware device. Those 3rd party apps should also be able to receive events from the native process, on a subscribe basis. So aside from the native process, I'll also supply "connector" libraries to the 3rd party developers, for all platforms/languages that they might choose (Java, C++, Python etc.) to embed in their apps so they can easily connect to the device with hardly any extra code needing to be written by them. I want to target all desktop/laptop OS platforms, and have a pretty good idea of what functions I want to expose, but ideally I don't want to be too stuck (i.e. I want it to be elegantly scalable from both client and server perspectives).
I'm looking for reliability, performance, maintainability, and cross-platform/language flexibility going forward, plus ease of development, in that order.
What should I use?
CORBA, MessagePack-RPC, Thrift, or something else entirely?
(I've omitted ICE because of its licensing)
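To make the shape of the problem concrete, the following is roughly what I picture a connector library exposing; every name here is a placeholder, not an existing API:

    // Hypothetical sketch of the "connector" library surface (all names are placeholders):
    // apps can poll the device synchronously and subscribe to pushed events.
    public interface DeviceConnector {

        /** One immutable event emitted by the native process. */
        final class DeviceEvent {
            public final String type;
            public final byte[] payload;

            public DeviceEvent(String type, byte[] payload) {
                this.type = type;
                this.payload = payload;
            }
        }

        /** Callback implemented by the 3rd-party application. */
        interface Listener {
            void onEvent(DeviceEvent event);
        }

        /** Synchronous poll of the current device state. */
        DeviceEvent poll();

        /** Subscribe to asynchronous events; returns an id usable for unsubscribing. */
        long subscribe(Listener listener);

        void unsubscribe(long subscriptionId);
    }

Whatever RPC mechanism I pick would sit behind an interface like this in each language's connector.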
Thrift or MessagePack is the best option going forward. Both are sleek and lightweight and do not add much latency to your process. They support most of the common languages and are under active development. At the current stage I would personally prefer Thrift, but MessagePack does seem to promise a lot of features.
Though Thrift might not be as Windows-friendly as we would like, people are using it on Windows.
This is a starter guide for Thrift on Windows:
http://wiki.apache.org/thrift/ThriftInstallationWin32
Only installing and getting the Thrift compiler can be troublesome on Windows. Using the generated files depends on the language you choose, and a lot of languages have good support for using them by importing the Thrift libraries (for Java it is very easy: there is a Maven artifact).
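As a rough illustration of how little client code is involved once the compiler has run, here is a hedged Java sketch; the DeviceService interface and its getStatus method are assumptions generated from a made-up IDL file, not part of Thrift itself:

    // Hypothetical service defined in device.thrift (the IDL and names are assumptions):
    //   service DeviceService { string getStatus(1: string deviceId) }
    // 'thrift --gen java device.thrift' produces DeviceService.Client and server stubs.
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class DeviceClientDemo {
        public static void main(String[] args) throws Exception {
            // Plain blocking socket transport with the default binary protocol.
            TTransport transport = new TSocket("localhost", 9090);
            transport.open();
            try {
                DeviceService.Client client =
                        new DeviceService.Client(new TBinaryProtocol(transport));
                // Call the remote method much like a local one.
                System.out.println(client.getStatus("device-42"));
            } finally {
                transport.close();
            }
        }
    }

The server side follows the same pattern with the generated Processor class, so most of your effort goes into the IDL, not the plumbing.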
There is a discussion on the RPC frameworks available at RPC frameworks available?
CORBA, in my opinion, is old, cumbersome and very heavyweight.
If ancient and heavyweight don't put you off, obsolete definitely should. Regardless, I can tell you that we've been using Google Protocol Buffers at work recently, and they're pretty easy to use.
From the developer's perspective, all you need to do is have a build of GPB (which really isn't that difficult), and then it will generate source files for you. The end result is a cross-platform binary message-passing interface (think XML plus limited RMI, not MPI-like functionality).
We use it on Windows to talk to an ARM-based Linux system (TS-7200s from Embedded ARM) running the same software. To my knowledge, it is compatible with many languages.
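As a rough illustration, here is a hedged Java sketch of the round trip; the DeviceEvent message and its fields are assumptions generated from a made-up .proto file, not something from your project:

    // Hypothetical message defined in device.proto (names are assumptions):
    //   message DeviceEvent { string sensor_id = 1; double value = 2; int64 timestamp_ms = 3; }
    // 'protoc --java_out=src/main/java device.proto' generates DeviceEvent with a builder.
    import com.google.protobuf.InvalidProtocolBufferException;

    public class DeviceEventDemo {
        public static void main(String[] args) throws InvalidProtocolBufferException {
            // Build a message with the generated builder API.
            DeviceEvent event = DeviceEvent.newBuilder()
                    .setSensorId("accel-0")
                    .setValue(9.81)
                    .setTimestampMs(System.currentTimeMillis())
                    .build();

            // Serialize to a compact binary form suitable for any transport (socket, pipe, file).
            byte[] wire = event.toByteArray();

            // The receiving side (in any supported language) parses the same bytes back.
            DeviceEvent decoded = DeviceEvent.parseFrom(wire);
            System.out.println(decoded.getSensorId() + " = " + decoded.getValue());
        }
    }

Note that protobuf only gives you the messages; you still pick the transport (sockets, a message queue, etc.) yourself.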
CORBA is the only free "RPC" option that would work for my system right now, even though it scales very badly. Thrift isn't Windows-friendly yet. MessagePack-RPC isn't yet available in all languages and OSes either, although it is still under development. If CORBA were elegantly scalable, it probably wouldn't have become obsolete at all.
Protocol Buffers and messaging would work; I'd have to develop both a client and a service implementation for every platform/language. It would also be very scalable. I've decided on this.
I'm currently using Apache Thrift for a Hospital Manager project. It is better than CORBA in many areas, not to mention it is lightweight and much easier to implement and understand. The learning curve for Thrift is definitely gentler than CORBA's, but the documentation is Thrift's weakest point.
I'm using a Ruby Thrift server to which Obj-C and Java clients connect. The Thrift parser or "compiler" does a pretty good job of generating source files for the languages you want, although the output is rather verbose. I would definitely look into Thrift, or Google Protocol Buffers, if I were starting a new project, since CORBA is really outdated and might not adopt new technologies in the future - not to mention that there are many vulnerabilities and exploits targeting CORBA which will not get patched since it is no longer in development, presenting some serious security holes in your new project.
Thrift supports many programming languages: C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Objective-C, JavaScript, Node.js, Smalltalk, OCaml and Delphi as of this writing. Supporting multiple languages is key, I think, for the purpose of your project.