As far as I know, Rational is a testing tool and Rational Rose is a modelling tool.
Rational is a company that makes tools for the software development life cycle. Rational Rose is one such tool, built by Rational.
Rational is now part of IBM. See www.rational.com.
Rational Software is a company that was acquired by IBM about 7 years ago and remains a brand in the IBM Software Group. The Rational portfolio is quite broad, concentrating on overall Application Lifecycle Management and Systems Management solutions: testing, quality management, configuration management, mobile development, requirements management, change management, agile development, mainframe development, enterprise modernization and much more.
There is IBM Rational Test RealTime, which is used for module and unit tests, as well as performance and memory profiling. It is mainly a testing tool; I know nothing about Rational Rose.
I want to make a machine for coconut tree maintenance, comprising:
1. Hydraulic/pneumatic lift
2. Electrical cutting machine
3. Small pressing pliers
4. Remote control system
Process:
Note: Assume all the above-mentioned units are attached to the top of the lift.
Raise the lift and start cutting the coconut leaves
Apply pressure to some parts of the tree's head
I plan to do this entire process through automation. I need some ideas about automation tools/products suited for this.
I am good at Java and similar programming concepts. I have some knowledge of PLCs/Arduino from browsing.
Thanks in advance.
You can use any PLC or PC-based control. Since you have Java knowledge, you could try PLCopen-compliant controllers with the ST (Structured Text) programming language (see the sketch at the end of this answer).
Beckhoff TwinCAT 3 has a good implementation of this. TwinCAT 3's ST supports OOP, but it is more similar to C++ than to Java. It also connects to every EtherCAT-enabled device. Keep in mind that it will probably be more expensive than a cheap stand-alone PLC that supports only Ladder Diagram and its own hardware.
EtherCAT (remote) I/O will operate the same on top of the lift as next to the controller (PC).
To design the device you described, you will need suitable hardware and electrical components such as I/O units, contactors, valves, sensors, etc.
Safety is very important when you have heavy moving parts and hydraulic pressure. Beckhoff TwinCAT 3 has a safety module, but it is programmed only as a function diagram. In my experience, safety should be the first thing in mind.
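At its core the control program is just a sequence of steps with interlocks. Purely as an illustration, and since you already know Java, here is a minimal sketch of that kind of state machine written in Java; every class, method and signal name is invented for this example, and on a real PLC the same logic would be written in an IEC 61131-3 language such as Structured Text and scanned cyclically by the runtime.

```java
// Hypothetical sketch only: on a real PLC this would be Structured Text,
// executed once per scan cycle. All names here are made up for illustration.
public class TreeMaintenanceSequence {

    enum State { IDLE, RAISING, CUTTING, PRESSING, LOWERING, ESTOP }

    private State state = State.IDLE;

    /** Called once per control cycle with the current sensor readings. */
    public void cycle(boolean startButton, boolean emergencyStop,
                      boolean liftAtTop, boolean cuttingDone,
                      boolean pressingDone, boolean liftAtBottom) {
        if (emergencyStop) {            // safety always wins
            state = State.ESTOP;
        }
        switch (state) {
            case IDLE:
                if (startButton) state = State.RAISING;
                break;
            case RAISING:               // e.g. open the lift valve here
                if (liftAtTop) state = State.CUTTING;
                break;
            case CUTTING:               // e.g. enable the cutter motor here
                if (cuttingDone) state = State.PRESSING;
                break;
            case PRESSING:              // e.g. actuate the pressing pliers here
                if (pressingDone) state = State.LOWERING;
                break;
            case LOWERING:              // e.g. lower the lift here
                if (liftAtBottom) state = State.IDLE;
                break;
            case ESTOP:                 // stay here until a manual reset
                break;
        }
    }

    public State getState() { return state; }
}
```

The real work is mapping those boolean signals to physical I/O (EtherCAT terminals, valves, limit switches) and wiring the emergency stop through certified safety hardware rather than through this application logic.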
We are working on a medical device test system, and as part of the system we must unit test the code.
NI is an ISO 9000 company, and as such I had assumed that LabVIEW was standardized, but it appears it is not, according to Wikipedia: https://en.wikipedia.org/wiki/Comparison_of_programming_languages
Do I now need to unit test every function I am going to use?
Is there a recommended path for developing medical device systems in an ISO 13485 manner?
National Instruments is indeed ISO 9001 certified. This has nothing to do with LabVIEW (or the G language, for that matter) being standardized; it is proprietary, and they do not have to standardize it in any way.
The level of testing of your code depends heavily on the safety class of the software. There are three safety classes, A, B and C, as per IEC 62304.
IEC 62304 outlines in great detail the level of effort you must put into testing your code.
Safety class C is the most stringent and requires the most effort and documentation.
If LabVIEW is used to test the validity of the medical device code, and is not part of the medical device itself, I do not believe you need to put any additional effort into testing the test code.
But again, the level of effort will depend heavily on the safety classification of the code under test.
I am writing an SRS, and according to the research I have done on non-functional requirements, "browser compatibility" testing falls under NFRs. Please explain why we treat "browser compatibility" as an NFR.
You can read the links below to understand. In functional testing we test each and every piece of functionality (how the product should behave); in non-functional testing (how the application works) we test load, stress, and so on, which is why browser compatibility comes under NFRs.
http://www.softwaretestinghelp.com/best-cross-browser-testing-tools-to-ease-your-browser-compatibility-testing-efforts/
http://www.guru99.com/compatibility-testing.html
The initial phase of compatibility testing is to define the set of environments or platforms the application is expected to work on.
The tester should have enough knowledge of the platforms/software/hardware to understand the expected application behavior under different configurations.
An environment needs to be set up for testing with different platforms, devices and networks to check whether your application runs well under different configurations.
Report the bugs, fix the defects, and re-test to confirm the fixes.
A functional requirement is about how the product should behave: what the expected output is for a given set of initial conditions and actions. Functional requirements take a business view of the product. If you are building software to run a dental office, the functional requirements are going to be about adding a patient, taking appointments, etc.
Non-functional requirements, on the other hand, are not about the "business behaviour" but more about the platform on which the software will run, the ergonomics of the product or its performance (although performance can become sort of "functional" if the software is useless above a certain response time).
Back to browser compatibility: this is not about the behaviour of the product. In our dental office example, the dentist does not really care whether it runs correctly on Chrome or Firefox; that is not what he is looking for to run his business. Nevertheless, if your implementation or tests conclude that the software runs correctly only on Chrome, then you will have to advise using that browser. But this has nothing to do with the functions of the product.
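To make the distinction concrete, here is a rough sketch of the same functional check run in two browsers; Selenium WebDriver is just one possible tool for this (it is not mentioned above, and it requires the matching browser drivers to be installed), and the URL and page title are made up. The check itself is the functional requirement and stays identical; only the environment changes, which is why compatibility sits on the non-functional side.

```java
// Sketch only: assumes Selenium WebDriver plus installed browser drivers;
// the URL and expected title are fictitious.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CrossBrowserCheck {

    // The functional check does not change from browser to browser.
    static boolean appointmentPageLoads(WebDriver driver) {
        try {
            driver.get("https://dental-office.example/appointments");
            return driver.getTitle().contains("Appointments");
        } finally {
            driver.quit();
        }
    }

    public static void main(String[] args) {
        // Only the environment (the browser) varies.
        System.out.println("Chrome:  " + appointmentPageLoads(new ChromeDriver()));
        System.out.println("Firefox: " + appointmentPageLoads(new FirefoxDriver()));
    }
}
```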
http://www.1stwebdesigner.com/design/tools-browser-compatibility-check/
Compatibility testing, part of software non-functional testing, is testing conducted on the application to evaluate its compatibility with the computing environment. The computing environment may contain some or all of the elements mentioned below:
Computing capacity of the hardware platform (IBM 360, HP 9000, etc.)
Bandwidth handling capacity of networking hardware
Compatibility of peripherals (Printer, DVD drive, etc.)
Operating systems (Linux, Windows, Mac etc.)
Database (Oracle, SQL Server, MySQL, etc.)
Other System Software (Web server, networking/ messaging tool, etc.)
Browser compatibility (Chrome, Firefox, Netscape, Internet Explorer, Safari, etc.)
Where do they differ?
What are the advantages of choosing libfreenect or OpenNI+SensorKinect, for example, over the Official SDK, and vice-versa?
What are the disadvantages?
Please note that the answer below is accurate as of this date, and some facts may very well become outdated in the near future. The current state of the official Kinect SDK is beta 1.00.12.
The first obvious difference is that the official SDK is maintained by the Microsoft Research team, while OpenKinect is an open source SDK maintained by the open source community. Both have their pros and cons.
The official SDK is developed by Microsoft, which also develops the hardware and therefore has access to internal information about the device that the open source community must reverse engineer. Obviously this is to Microsoft's advantage.
Microsoft is pouring a lot of money into this device, and I am sure that they will do what they feel is necessary to keep their SDK up to par. Having that financial backing gives many advantages.
On the other hand, never underestimate the force of the open source community: "The OpenKinect community consists of over 2000 members contributing their time and code to the Project. Our members have joined this Project with the mission of creating the best possible suite of applications for the Kinect. OpenKinect is a true "open source" community!" - http://openkinect.org/wiki/Main_Page.
OpenKinect was released long before the official SDK, as the Kinect device was hacked within a day or two of its release. Kudos to OpenKinect!
Programming languages supported:
Official SDK: C++, C#, or Visual Basic by using Microsoft Visual Studio 2010.
OpenKinect: Python, C, C++, C#, Java, Lisp and more! Obviously not requiring Visual Studio.
Operating systems support:
Official SDK: only installs on Windows 7.
OpenKinect: runs on Linux, OS X and Windows
Clearly advantage OpenKinect.
License:
The official SDK in its current beta state is for testing only. The SDK has been developed specifically to encourage wide exploration and experimentation by academic, research and enthusiast communities; commercial applications are not permitted. Note, however, that this will probably change in future releases of the SDK. Visit the FAQ for more information.
OpenKinect appears to be open for commercial usage, but online sources state that it may not be that simple. I would take a good look at the terms before releasing any commercial apps with it. Read Kinect – Licensing implications of open hardware projects for more info.
Documentation and support:
Official SDK: well documented and provides a support forum
OpenKinect: appears to have a mailing list, Twitter and IRC, but no official forum/Q&A? The documentation on the website is not as rich as I would like it to be.
Device calibration:
Different Kinect devices may differ slightly depending on the batch that they were produced in. Thus device calibration is sometimes required. But:
The official SDK does not provide any calibration settings, but I have so far not had to calibrate the device I am working on. According to something I read online (link lost), the calibration parameters are written to the Kinect device at production time, so with the official SDK calibration is not needed.
OpenKinect features device calibration: http://openkinect.org/wiki/Calibration. Thus I believe you should calibrate your device if you go with OpenKinect.
If it is true that calibration is only needed with OpenKinect, that is a big advantage for the official SDK, as it is easier to distribute and install applications without such a need.
Personally, after a failed try with the OpenKinect SDK I went with the official SDK, which
came with drivers that installed out of the box
came with examples and code that made it easy to get down to business
All in all, I could start my own development within 15 minutes or so.
Now, after working with the Kinect for a few months, I have to say that I am quite satisfied with the API provided. I cannot, however, compare it to the OpenKinect SDK, as I in fact never got it working (but perhaps I didn't give it a fair try).
UPDATE: As of February 1st 2012 there is a commercial license for the official SDK:
"The commercial license for this release authorizes development and distribution of commercial applications. The prior SDK was a beta, and as a result was appropriate only for research, testing and experimentation, and was not suitable for use with a final, commercial product. The new license will enable developers to create and sell their Kinect for Windows applications to end user customers using Kinect for Windows hardware on Windows platforms."
Developer Frequently Asked Questions
As explained by Avada Kedavra in his/her answer, these are some interesting differences:
supported operating systems: you can only use Microsoft SDK on Windows, while open source solutions are usually able to work on other operating systems;
programming languages: you have a wider choice with open source solutions, while Microsoft only supports C++ and C# (Visual Basic is no longer supported as of SDK 2.0);
documentation and support: Microsoft offers a good forum and well-written documentation (with a lot of samples), but there are several open source solutions that are also well documented;
license: Microsoft is more or less proprietary, open source is more or less free. Consider also that open source ideas have sometimes been bought by big companies and turned into something that is no longer open. That will probably not happen in your case, but keep this additional eventuality in mind.
In my personal opinion, the most significant difference between open source solutions and Microsoft SDKs is strictly related to the skeletal tracking algorithm.
While depth and RGB data can be effectively provided by both open/free APIs and Microsoft SDKs, implementing skeletal tracking capabilities is not only a matter of reverse engineering.
To implement such an algorithm, developers must have strong competences in pattern recognition and machine learning, and I am quite sure that this kind of knowledge is available in the open source community. But the implementation of skeletal tracking is based on a "trained" algorithm, which requires a lot of experiments to collect a very large amount of data. These data are then used to "train" the algorithm so that it can recognize the skeletal joints.
Getting enough data, but also adjusting it and using it properly, requires a lot of time and money. Microsoft researchers and developers are in the best position to work on this kind of thing, simply because it is their job.
In my previous experience, I have noticed that open source solutions provide good skeletal tracking capabilities, but they are not at the same level as what Microsoft offers with its SDK.
Remember also that the Microsoft SDK provides a lot of additional capabilities, like facial recognition and joint orientation, and several widgets that are very useful if you want to quickly build a gestural GUI.
So what I suggest is: if you are working on a project in which you simply need depth and/or RGB data, or if you need to use a programming language that is not supported by the Microsoft SDK, then you should opt for an open source solution. Otherwise, the Microsoft SDK would be my choice.
I would strongly recommend the Cinder framework. (libcinder.org)
It supports both OpenNI and Kinect development if you're using C++. It now supports Kinect SDK 1.7 and OpenNI 2, via these CinderBlocks:
MS Kinect SDK 1.7 (stable)
https://github.com/BanTheRewind/Cinder-MsKinect
OpenNI 2 / NITE 2.2 (alpha)
https://github.com/wieden-kennedy/Cinder-OpenNI
Both can do skeletal tracking out of the box, with OpenNI capable of tracking up to six skeletons simultaneously. OpenNI 2 is gaining rapidly on the Kinect SDK, although the new Kinect will probably change that when it comes out next month. However, the basic underlying principles are unlikely to change.
The main drawback of the initial release of OpenNI was that it required a full-body activation pose to recognise a user, which was a deal breaker for a lot of applications; however, this seems to have been solved in newer versions, and OpenNI 2 also supports robust hand tracking at close range, although it still requires a focus gesture initially. If you work on Mac or Linux, it's pretty much your only choice.
I would like to develop a mobile application that is able to access all the features of the mobile device it runs on (camera, files, phone and network connectivity). I intend to build a series of applications that each have a specific function to perform, rather than a single application with a large feature set. My programming background is C, C# and web applications.
What would be the best tool set to use to do this? I have looked at using the NetBeans IDE to create Java ME applications using LWUIT; this looks promising, but what are the caveats?
I want to target the largest universe of mobile devices possible.
J2ME is the way to go to reach the masses, in the consumer or the business market. From a consumer standpoint, most of the world's mobile phones support J2ME. From a business standpoint, most of the world's smart phones support J2ME.
Nokia owns a 40% share of the smartphone market (and of the whole market) worldwide. Next in line is BlackBerry with a 13% share. Both have standard implementations of J2ME on their devices (though BlackBerry also has a proprietary version of Java as well). On top of this, most devices that run Windows Mobile come with a JVM as well (I developed a game recently that initially targeted the Sony Ericsson W810i, and it ran flawlessly on my HTC Tilt's JVM). Add in the fact that Android has a Java SDK, and the only segments you are really missing out on are the BREW-only phones and the iPhone.
I'm not a huge backer of J2ME. I just know that every other mobile platform has disappeared from my life over the last 3 years as it just makes financial sense for companies to only target the J2ME segment of the industry.
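For a feel of what J2ME development actually looks like, here is a minimal MIDP MIDlet skeleton; every J2ME application is built around this lifecycle. The class name and the text it displays are placeholders, of course.

```java
// Minimal MIDP MIDlet skeleton: the lifecycle every J2ME application follows.
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

public class HelloMidlet extends MIDlet {

    public void startApp() {
        // Called by the application manager when the MIDlet starts or resumes.
        Form form = new Form("Hello");
        form.append("Hello from J2ME");
        Display.getDisplay(this).setCurrent(form);
    }

    public void pauseApp() {
        // Called when the device needs to pause the MIDlet (e.g. incoming call).
    }

    public void destroyApp(boolean unconditional) {
        // Release resources here; the application manager is shutting us down.
    }
}
```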
You are facing the usual main issue of mobile development: targeting as many handsets as possible with only one programming language means using J2ME, which doesn't quite give you access to all the features of the handset.
Most open handsets will support J2ME, but different phone manufacturers implement it in different ways, and fragmentation is enormous across the board. Unfortunately, the majority of open handsets (the ones on which you can install third-party applications) only allow you to develop in J2ME.
The only good news is that your intent to only write small applications will provide large relief from fragmentation issues.
J2ME also has huge limitations: restricted file system access, the complete lack of a telephony API, very poor interaction with system application management...
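To illustrate the file system point: file access is only available through the optional JSR-75 FileConnection package, which a handset may or may not implement, and whose root paths and security prompts vary per manufacturer. A rough sketch, with a made-up root and file name:

```java
// Sketch of JSR-75 (FileConnection) access. The optional package is not
// guaranteed to exist, and "file:///root1/" is a handset-specific example.
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;

public class FileAccessExample {

    public static void writeNote() {
        // Optional package: check for it before using it.
        if (System.getProperty("microedition.io.file.FileConnection.version") == null) {
            return; // no file system API on this handset
        }
        try {
            // The security framework will usually prompt the user here.
            FileConnection fc = (FileConnection)
                    Connector.open("file:///root1/note.txt", Connector.READ_WRITE);
            if (!fc.exists()) {
                fc.create();
            }
            OutputStream out = fc.openOutputStream();
            out.write("hello".getBytes());
            out.close();
            fc.close();
        } catch (Exception e) {
            // Many handsets restrict or deny access entirely.
        }
    }
}
```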
To get full features, you always need to use the native technology of the open platform you are targeting, be it Android, iPhone, the several variants of Symbian OS, BREW, Windows Mobile or Palm OS handsets. Each of these has its own native technology.
Writing your application many times in many different languages is the costly price of wanting both a large number of targeted handsets and access to the full features of each of them.
I'm a Symbian/J2ME veteran myself and, given your stated background and goals, I suspect that you are trying to learn about mobile technologies. I'll therefore shamelessly plug my book, which is meant as an introduction to the Symbian development ecosystem:
http://www.amazon.com/Quick-Recipes-Symbian-Smartphone-Development/dp/0470997834/
Good luck
If your primary goal is to reach the consumer market, Java 2 Micro Edition (J2ME) is probably your best bet. All popular mobile phones (from Nokia, Sony Ericsson, Samsung) come with a Java Virtual Machine installed. If you look at Google, for example, they are developing all of their mobile applications (like Gmail and Google Maps) in Java.
If you instead are targeting business customers, the Microsoft .NET Compact Framework is the way to go in my opinion. The Windows Mobile operating system holds a strong position in the business market, mainly because of Outlook Mobile and its integration with Exchange.
I respectfully disagree with Fostah.
If you want to reach the masses, the Web is your best bet. It's far easier to write a simple Web application that will work on millions of devices.
And the best bit is that you can easily update your application and improve the user experience every day with the Web.