I want to start smartcard programming soon. Please help me with what is required to start learning with javacard. Which IDE (if any IDE supports it), and what software and hardware? Just as there are mobile phone simulators, is there a smartcard simulator? Or, if I have to buy a smartcard, please specify which cards, and where and how I can get them.
A general answer regarding smartcard programming is that you should be ready to navigate a confusing list of tools and technologies. Typically smartcard developers begin with a specific hardware platform in mind: more specific than simply javacard.
Since you've specifically mentioned javacard, we can focus on a few starting points.
Javacard SDK
You might begin with the javacard dev kit. I haven't used the most recent - I'm still using 2.0.2. This dev kit is very command-line oriented, so expect to be doing most of your work outside an IDE. However, the documentation is pretty helpful and should get you up to speed pretty quickly. At any rate, it's a good place to start, since it's official.
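To give you a feel for what you'll be building with the kit, here is a minimal applet sketch. The package name, AID and instruction handling are placeholders, but the javacard.framework API shown is standard across kit versions:

```java
// A minimal JavaCard applet sketch (package and instruction byte are placeholders).
package com.example.hello;

import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

public class HelloApplet extends Applet {

    // Called by the JCRE when the applet is installed on the card.
    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new HelloApplet().register();
    }

    // Dispatch incoming APDUs; reject anything we don't understand.
    public void process(APDU apdu) {
        if (selectingApplet()) {
            return; // the SELECT command itself is handled by the framework
        }
        byte[] buf = apdu.getBuffer();
        switch (buf[ISO7816.OFFSET_INS]) {
            case (byte) 0x00:
                return; // no-op instruction, just for this sketch
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}
```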
EclipseJCDE also looks interesting, but I haven't used it. I seem to recall another project aiming to build javacard Eclipse tools, but I may just be thinking of EclipseJCDE.
IBM Tools
At one time IBM published and maintained a set of JCOP tools that integrated with the Eclipse IDE. The great thing about this is that they would send you a package containing some dev tools and a couple of JCOP cards. The annoying thing is that an activation code was required. Have a look here. The download link is still good; good luck with the email address listed there. Also note that these tools require an older build of Eclipse. The build/debug support is very good, including a built-in javacard simulator.
Global Platform
If you plan on doing javacard programming, you should also get to know Global Platform. It's a smartcard standard; in the context of javacard, you'll need the GP spec whenever you load and manage javacard applets. This is required for working with JCOP cards. For the latest GP spec, search for GlobalPlatform Card Specifications. You'll need to be very familiar with basic smartcard concepts, e.g. APDUs.
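To make the APDU side concrete, here is a rough sketch of driving a card from the PC using Java SE's javax.smartcardio package (available since Java 6, and separate from the javacard kit itself). The AID bytes are placeholders for whatever applet you've loaded:

```java
// Sending a basic APDU from the PC side with javax.smartcardio.
import java.util.List;
import javax.smartcardio.Card;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;
import javax.smartcardio.TerminalFactory;

public class ApduDemo {
    public static void main(String[] args) throws Exception {
        List<CardTerminal> terminals = TerminalFactory.getDefault().terminals().list();
        Card card = terminals.get(0).connect("*"); // any available protocol
        CardChannel channel = card.getBasicChannel();

        // SELECT by AID: CLA=00, INS=A4, P1=04, P2=00, data=AID (placeholder AID)
        byte[] aid = {(byte) 0xA0, 0x00, 0x00, 0x00, 0x62, 0x03, 0x01};
        ResponseAPDU r = channel.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x00, aid));
        System.out.printf("SW=%04X%n", r.getSW()); // 0x9000 means success

        card.disconnect(false);
    }
}
```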
Hardware
Hardware choices are too varied for me to make useful recommendations, beyond the JCOP stuff above. As I mentioned, if you can use the IBM kit then you'll get a good JCOP/javacard simulator with the Eclipse tools. I'm sure there are other card simulators available.
etc.
Beyond that there is a long list of other technical specifications employed by smartcard programmers, and unfortunately many of them aren't freely available (ISO docs). If you'll be doing GSM programming, I think you can get all of the GSM specs; search for ETSI GSM specifications. GSM 11.11 is particularly useful for learning more about APDU command/response without access to the ISO specs, e.g. ISO 7816-4.
Let me share two free tools that I am using to learn javacard. I hope they help others get started with javacard easily.
JCIDE: It is an Integrated Development Environment designed specifically for the Java Card programming language.
PyAPDUTool: It is a handy tool which can communicate with the card via a reader connected to the PC. It is a PC/SC compliant application.
All,
I'm attempting to estimate the effort to port an app developed on Windows (.NET) to Linux (Mono). I came across the MoMA tool, which attempts to look through my .exe and find potential areas of incompatibility. Most of my issues appear to be centered around getting/setting network settings, getting network info, etc. (ManagementBaseObject.get_Item and set_Item, etc.).
In almost all of the cases, the Mono functionality is listed as "ToDo". For estimation purposes, is it safe to assume most/all of these have some kind of workaround? I would imagine this type of basic networking support must be included in the latest version of Mono. Or should I assume none of this is currently available and I would be stuck waiting for it to be implemented (or be forced to implement it myself)?
Thanks,
Dan
First, see Mono Compatible Networking/Socket Library. Also, take a look at Cross-Platform Network Applications with Mono. You can start with the C# Network Library.
We are looking for automated testing software for our web application. We need to come up with a solution or software with which our non-IT staff, as well as the developers, could write test cases.
For example, I've run through some of them, such as SmartBear, National Instruments and IBM. Most of these are MS Windows based or target commercial Linux distros, which removes them from our list, since we are all Debian based.
Any recommendation or guideline would be much appreciated.
PS: We don't have any budget limit!
You're going to have a hard time getting tooling for non-technical testers to build test cases if you limit yourselves to Debian OS for developing and running the tests on. There's no reason you couldn't have a few Windows systems to manage your test suites from -- those would run against your web site just fine, regardless of what stack it's hosted on. That would open you up to the tools you mentioned (and Telerik's Test Studio, the tool I help promote).
Those Windows systems could easily be run via whatever virtualization host you prefer, so you wouldn't even need physical systems to deal with that. You could easily share the same source control repository as your devs, too, since nearly every decent SCM has Windows clients.
If you're unwilling to consider having a few Windows boxes around for your testing, then you'll need to have a look at getting all your testers proficient in APIs and frameworks like WebDriver and Robot Framework. The page-object gem from Jeff Morgan (@chzy) in Ruby would be another option, as would Adam Goucher's Saunter (in Python).
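For a feel of what the framework route involves, here is a minimal WebDriver smoke test in Java; the URL and element IDs are placeholders for your own application:

```java
// A minimal WebDriver (Selenium) smoke test; locators are placeholders.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://your-app.example.com/login");
            driver.findElement(By.id("username")).sendKeys("tester");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // A crude assertion: the page title changes after login.
            if (!driver.getTitle().contains("Dashboard")) {
                throw new AssertionError("Login did not reach the dashboard");
            }
        } finally {
            driver.quit();
        }
    }
}
```

Keyword layers like Robot Framework sit on top of exactly this kind of script, which is what makes them approachable for non-technical testers.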
I am developing a Remote Software Provisioning system that should be able to handle all deployment, installation, un-installation and upgrades of software components. Software can be in any language (Java, .NET, C/C++, etc.) and the target side can be PCs, embedded systems and smart phones.
I have found Apache ACE to be a good candidate for developing this system.
I want to know if there is any advantage/necessity of using OSGi at target side as Apache ACE can do software provisioning to non-OSGi targets as well.
Having a modular framework like OSGi at the client side is a huge advantage when doing remote management, because it gives you much insight into what's happening inside - installed bundles, dependencies, states of the bundles, available services etc. This helps a lot when you have to solve a problem remotely. Another advantage is that OSGi basically forces programmers to develop proper modular and dynamic systems, which makes (remote) updating much easier.
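To illustrate the dynamics involved, here is a minimal sketch of an OSGi bundle activator. The service it registers is trivial, but the start/stop lifecycle is exactly what remote management tooling hooks into:

```java
// A minimal OSGi bundle activator: the framework calls start/stop as the
// bundle is installed, updated, or removed at runtime.
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class Activator implements BundleActivator {
    private ServiceRegistration<Runnable> registration;

    public void start(BundleContext context) {
        // Publish a service; remote tooling can see it appear.
        Runnable service = () -> System.out.println("hello from the bundle");
        registration = context.registerService(Runnable.class, service, null);
    }

    public void stop(BundleContext context) {
        // Clean up so the bundle can be updated or removed while the app runs.
        registration.unregister();
    }
}
```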
So, if you have to decide now what language and framework to use for the client side, I strongly recommend OSGi for the embedded and mobile clients. For the PCs (I guess you mean desktop PCs?) this is probably not the best choice - it depends a lot on what you want to achieve there. If you want to install MS Office remotely, OSGi won't bring you forward ;)
However, if you already have existing programs at the client side and are discussing whether to convert them to OSGi, I would recommend investing some time first to see whether they can be converted easily. Some software packages could give you a lot of trouble converting to OSGi, not because OSGi is complex, but because the program itself is not modular and makes a lot of assumptions about the static nature of the environment (e.g. nothing ever disappears, parts of the system never get updated, etc.). The irony in the matter is that these are exactly the programs which will give you the most trouble later anyway, no matter which remote provisioning system you choose.
If you have OSGi at some of the targets, be sure to use a remote provisioning system which gives you access to the full OSGi functionality and not only the most basic and simple install and update functions. I haven't yet used Apache ACE, but I have experience with another provisioning system - mPower Remote Manager. Here are some snapshots from the documentation which can give you a feeling for what is possible with OSGi as a base - you can draw your own conclusions as to whether it will be useful for your case or not.
I've given some examples in the other question you asked:
What are the non-osgi targets with which Apache ACE can work
You can write your own management agent that talks to the ACE server and installs artifacts. There actually are a couple of places where you could hook in your own code and protocol. Is there a concrete language/environment you're thinking of using, or are you just exploring the possibilities right now?
Well, the advantages of OSGi haven't changed, so for that I can refer you to the standard page.
To be a bit more constructive, I'll read the question as 'Should I bother converting my application to OSGi, as it is not necessary for ACE?'
I think that depends on what 'kind' of updating mechanism you're after. If you have a monolithic application (at least from the provisioning perspective) which you deploy and update only as a whole (like an iOS app), then there isn't much to gain for provisioning purposes by using OSGi.
For the rest I can tell you the same as I tell anybody else: converting an application to OSGi isn't hard, but modularizing code can be a nightmare; it's something you'll need to face at some point, OSGi or not. If your code is modularized already, using OSGi should be a piece of cake.
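The mechanical part of the conversion really is small. Assuming a typical setup, turning a jar into a bundle is mostly a matter of adding manifest headers like these (all names below are placeholders):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.myapp
Bundle-Version: 1.0.0
Bundle-Activator: com.example.myapp.Activator
Import-Package: org.osgi.framework
Export-Package: com.example.myapp.api
```

The hard part is making sure the Import-Package/Export-Package lines reflect genuinely modular code.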
I'm writing software for a new hardware device which I want any kind of new third-party application to be able to access if they want to.
The software will be a native process (C++) that should be pollable by 3rd party games and applications that want to support the hardware device. Those 3rd party apps should also be able to receive events from the native process, on a subscribe basis. So aside from the native process, I'll also supply "connector" libraries to the 3rd party developers, for all platforms/languages that they might choose (Java, C++, Python etc.) to embed in their apps so they can easily connect to the device with hardly any extra code needing to be written by them. I want to target all desktop/laptop OS platforms, and have a pretty good idea of what functions I want to expose, but ideally I don't want to be too stuck (i.e. I want it to be elegantly scalable from both client and server perspectives).
I'm looking for reliability going forward, performance, maintainability going forward, and cross-platform/language flexibility going forward, and ease of development, in that order.
What should I use?
CORBA, MessagePack-RPC, Thrift, or something else entirely?
(I've omitted ICE because of its licensing)
Thrift or MessagePack is the best option going forward. Both are sleek and lightweight, and do not add much latency to your process. They have support for most of the common languages and are in active development. At the current stage I would personally prefer Thrift, but MessagePack does seem to promise a lot of features.
Though Thrift might not be as Windows-friendly as we'd want, people are using it on Windows.
This is a starter guide for Thrift on Windows:
http://wiki.apache.org/thrift/ThriftInstallationWin32
Only installing and getting the Thrift compiler can be troublesome on Windows. Using the generated files depends on the language you choose, and a lot of the languages have good support for using them by importing the Thrift libraries. (In Java it is very easy; there is a Maven artifact.)
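As a sketch of the workflow (the Calculator service is a made-up example, and Calculator.Client is code the Thrift compiler generates, not part of the runtime):

```java
// Given a trivial Thrift IDL, e.g.:
//   service Calculator { i32 add(1: i32 a, 2: i32 b) }
// "thrift --gen java calculator.thrift" generates the Calculator.Client used below.
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ThriftClientDemo {
    public static void main(String[] args) throws Exception {
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();
        TProtocol protocol = new TBinaryProtocol(transport);

        // Calculator.Client is generated code, not part of the Thrift runtime.
        Calculator.Client client = new Calculator.Client(protocol);
        System.out.println("2 + 3 = " + client.add(2, 3));

        transport.close();
    }
}
```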
There is a discussion on the RPC frameworks available at RPC frameworks available?
CORBA, in my opinion, is old, cumbersome and very heavyweight.
If ancient and heavyweight don't put you off, obsolete definitely should. Regardless, I can tell you that we've been using Google Protocol Buffers at work recently, and they're pretty easy to use.
From the developer's perspective, all you need to do is have a build of GPB (which really isn't that difficult), and then it will generate source files for you. The end result is a cross-platform binary message-passing interface (think XML and limited RMI, not MPI-like functionality).
We use it on Windows to talk to an ARM-based Linux system (TS-7200s from Embedded ARM) running the same software. To my knowledge, it is compatible with many languages.
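As a sketch of what "generate source files for you" means in practice (the Ping message and its generated class here are made-up examples):

```java
// Given a trivial .proto definition, e.g.:
//   message Ping { required int32 seq = 1; }
// protoc generates the Ping class used below (names are placeholders).
public class ProtoDemo {
    public static void main(String[] args) throws Exception {
        // Build and serialize a message...
        Ping ping = Ping.newBuilder().setSeq(42).build();
        byte[] wire = ping.toByteArray();

        // ...and parse it back on the other side, whatever the platform.
        Ping echoed = Ping.parseFrom(wire);
        System.out.println("seq = " + echoed.getSeq());
    }
}
```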
CORBA is the only free "RPC" option that would work for my system right now, even though it scales very badly. Thrift isn't Windows-friendly yet. MessagePack-RPC isn't yet available in all languages and OSs either, even though it's still in development. If CORBA were elegantly scalable it probably wouldn't have become obsolete at all.
Protocol Buffers and messaging would work, though I'd have to develop both a client and a service implementation for every platform/language. It would also be very scalable. I've decided on this.
I'm currently using Apache Thrift for a Hospital Manager project. It is better than CORBA in many areas, not to mention it is lightweight and much easier to implement and understand. The learning curve for Thrift is definitely gentler compared to CORBA, but the documentation is Thrift's weakest point.
I'm using a Ruby Thrift server to which Obj-C and Java clients connect. The Thrift parser or "compiler" does a pretty good job generating source files for the languages you want, although it is far too verbose. I would definitely look into implementing Thrift, or Google Protocol Buffers, if I were starting a new project, since CORBA is really outdated and might not incorporate new technologies in the future. Not to mention that there are many vulnerabilities and exploits targeting CORBA that will not get patched, since it's not in development anymore, presenting some serious security holes in your new project.
Thrift supports many programming languages: C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Objective-C, JavaScript, Node.js, Smalltalk, OCaml and Delphi as of this writing. Supporting multiple languages is key, I think, for the purpose of your project.
I seem to end up evaluating a lot of software. This requires me to constantly install all kinds of things on my system. It creates huge clutter, and I spend a lot of time during the install process and, if I don't like the software, on removing everything I've done afterwards. Much of my evaluation tends away from the features of the software being evaluated and toward how difficult it is to install. I'm sure I miss good software which may have actually been a better choice because of this startup cost.
With the advent of VM software like VMWare Player and VirtualBox, it would be much easier to sell someone like me your software, if you just provided an image that I could load into the VM and run. I'd be looking at the features almost immediately rather than fighting with which revision of whatever. The VM would take care of all of this for me.
Am I missing something, or should vendors and OSS start distributing VMs for their wares?
Most of my evaluations are for server side software installed on Linux, so OS licensing is not the issue.
VMs require that the operating system have a valid license key. For free operating systems this wouldn't be an issue, but if a vendor is building on something like Windows, each time they send out a demo VM of their software, they're sending out a license key that they would have to pay for.
This would be incredibly expensive for most companies.
The only downside, IMHO, is the size of the images: if, say, you have a 20 MB application, do you really want to download/transfer an entire OS just for that application?
I would say a better approach would be to have a ready-to-go VM, and then you simply take a snapshot (on VirtualBox; I assume a similar feature exists in other players).
Then simply install the application inside your sandbox environment, and just zap it when done (i.e. return to your snapshot).
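For reference, the take/restore cycle can be driven from the command line with VBoxManage; the VM and snapshot names here are placeholders:

```
VBoxManage snapshot "EvalSandbox" take "clean"     # record the pristine state
# ... install and evaluate the application inside the VM ...
VBoxManage controlvm "EvalSandbox" poweroff
VBoxManage snapshot "EvalSandbox" restore "clean"  # zap it: back to pristine
```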
Darknight
This can be done for software that runs on open source platforms, and VMware has a library of images which do just this (though the images that are used for evaluating commercial software are generally for infrastructure-type things that have very, very complex installation requirements):
http://www.vmware.com/appliances/
However, if the software is for the Windows platform, you don't really have the opportunity to do this, as Microsoft's Windows licensing would prevent it. Unless you're Microsoft, of course, in which case you can in fact do this - and MS has done this to permit easier evaluation of such software as Visual Studio, SQL Server, and many others:
http://technet.microsoft.com/en-us/bb738372.aspx?ppud=4
Novell has an appliance builder called Suse Studio that lets you pick the software you want; it builds out a VM with the software (and dependencies, etc.) for you. You can then try out the VM, download it, etc.
Whether the software you're looking for is available or not is a different matter.
Disclaimer: I work for Novell (though not with the Suse team)
But yes, if you can deal with the OS licensing issues, or possibly host trial environments yourself, this is a very effective way for a vendor to demo their app. The problem is that vendors don't always have the infrastructure (or awareness) to do so.
Microsoft provides fully-provisioned VMs for time-limited trials of its software. So if you want to trial select Microsoft products in that manner, you can do that today.
There is no sign, though, that Microsoft will make this available to third party Windows software vendors.
In the SaaS (Software-as-a-Service) world, you can get fully-provisioned virtual servers that include the OS and your software of interest on a pay-as-you-go basis, based on both Linux and Windows. For example, see Amazon Web Services.
For Windows, you may be better off developing a portable application that runs from a USB key. That is how Embarcadero distributes All Access. I received a 4 GB USB key that contained multiple applications. Most could be run straight from the key without installation. I believe Embarcadero will be licensing the technology at some stage.
If you are using a programming language such as Delphi or C++ with little in the way of external dependencies, a portable application is straightforward to develop. For .NET, it is much harder, but can be done with Mono, or something like Virtual Application Studio.