CAN-bus bootloader standards - embedded

I'm developing an open source OTA update system for a few MCUs of a certain project. I wonder whether there is some "standard" protocol for CAN-bus based bootloaders. Everything I have seen online and in application notes from the chip manufacturers seems to use its own flavour of communication, and thus its own specialized upload software too (mostly written to demonstrate the AN).
My question is, am I missing something? Is there some standard way of doing this that I should adhere to, or should I just roll my own like they do and call it a day?
Features I'm interested in on the protocol side, besides the obvious ones: checksumming, digital signatures, authenticated encryption.

Based on your tag, and although I cannot tell this from your question itself, I will assume for now that you want to develop a bootloader for automotive ECUs that have a CAN connection.
The relevant protocols providing these services are ISO 14229-3 (UDS on CAN) and SAE J1939/73, the first being far more common in my experience.
For development purposes, ASAM MCD-1 XCP also has support for flashing.
However, these only define the communication services; they do not cover the usual usage patterns, which differ a lot across OEMs.
For security, the German OEMs put together a document called "HIS Security Module Specification", which unfortunately I can no longer find on the web.
They also have a blueprint for the design of a boot-loader.
However, it is somewhat outdated by now, as bootloaders today are often at least partially based on AUTOSAR, like the applications themselves.
Finally, they also have a document that partially specifies how the services above are used for flashing an ECU.
If you need further input, feel free to ask.
However, you will need your own access to the non-free industry standards and recommendations.
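To make the ISO 14229 route a bit more concrete, here is a rough sketch of the service sequence a UDS-based flash tool typically drives. It is only an illustration: the transport (ISO-TP over CAN, ISO 15765-2) is stubbed out, and the security level, seed/key algorithm and routine identifiers are placeholders that vary by OEM.

```python
# Rough sketch, not a working flash tool: the usual UDS reprogramming dialogue.
import struct


def send_request(payload: bytes) -> bytes:
    """Stub: send one UDS request over ISO-TP and return the positive response."""
    raise NotImplementedError("wire this up to your CAN/ISO-TP stack")


def compute_key(seed: bytes) -> bytes:
    """Stub: OEM-specific seed/key transformation for SecurityAccess."""
    raise NotImplementedError


def flash_image(address: int, image: bytes) -> None:
    send_request(bytes([0x10, 0x02]))              # DiagnosticSessionControl: programming session
    seed = send_request(bytes([0x27, 0x01]))[2:]   # SecurityAccess: requestSeed (level 0x01 is a placeholder)
    send_request(bytes([0x27, 0x02]) + compute_key(seed))  # SecurityAccess: sendKey

    # RoutineControl "erase memory"; routine ID 0xFF00 is commonly used but not mandated
    send_request(bytes([0x31, 0x01, 0xFF, 0x00]) + struct.pack(">II", address, len(image)))

    # RequestDownload: 0x00 = no compression/encryption, 0x44 = 4-byte address and length
    resp = send_request(bytes([0x34, 0x00, 0x44]) + struct.pack(">II", address, len(image)))
    max_block = int.from_bytes(resp[2:], "big")    # maxNumberOfBlockLength from the response
    chunk_size = max_block - 2                     # minus the SID and block sequence counter bytes

    counter = 1
    for offset in range(0, len(image), chunk_size):
        send_request(bytes([0x36, counter & 0xFF]) + image[offset:offset + chunk_size])  # TransferData
        counter += 1

    send_request(bytes([0x37]))                    # RequestTransferExit
    send_request(bytes([0x11, 0x01]))              # ECUReset into the freshly flashed image
```

Checksum or signature verification is usually triggered via a further RoutineControl call before the reset, but the exact routine and data format again depend on the OEM's flashing specification.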

Related

How to address multi-vendor ATM support in a Windows application

After reading about the CEN/XFS programming reference, I thought it would be "easy" to write ATM software that would be supported on all ATMs. At first glance, the whole standard seemed reasonable to me in terms of portability.
However, to my great surprise, I have had access to some ATMs from well-known vendors that do not even have the Microsoft XFS manager (msxfs.dll, etc.) installed. I thought this would be a very rare case.
I have been told that some vendors have their own XFS manager. Is that true? I thought JXFS or a vendor-specific layer would depend on the CEN/XFS manager under the hood.
If so, do I have to be aware of all the vendor-dependent APIs? I refuse to believe this industry works like this.
The sad truth is that generic software doesn't work that well on any of the ATMs out there.
Generally speaking, I believe every vendor creates their own XFS manager. The XFS manager itself is pretty generic, though, so whoever provides it is not that big a deal. The actual device and service provider implementations are where the real differences lie.
So you could write your software against a common subset of the features, and you could even get a decent level of operability using that approach; that is, until you need to start handling the error cases. At that point the limitations create situations that simply make generic software useless in practice.
The reason is simply that the devices are so different at the implementation level, and thus can do different things during and after error conditions.
So even though the CEN/XFS error codes might be the same for two vendors, the required operations can differ quite a bit: their responses may indicate different severities, and an error condition might even be self-clearing on one device while requiring operator intervention on another.
And since you naturally want all the available benefits from the hardware you have, at that point you start to need configuration options that are simply outside the scope of CEN/XFS. Once you go that way you do get the benefits of the hardware, but it also means higher complexity in your software. Oh, and you'll need lots and lots of testing, as sadly you can't really trust vendor documentation either...
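To illustrate the error-handling point, here is a small sketch of the kind of per-vendor recovery-policy table a "generic" XFS client tends to accumulate once it has to survive real error conditions. The vendor names and the mapping itself are made up; the error-code strings are only examples of CEN/XFS-style codes.

```python
# Illustrative only: the same XFS error code can require different recovery
# actions depending on the vendor/device, so the "generic" client ends up
# carrying vendor-specific configuration outside the scope of CEN/XFS anyway.
from enum import Enum, auto


class Recovery(Enum):
    RETRY = auto()          # error clears itself, just retry the command
    RESET_DEVICE = auto()   # reset the device, then retry
    OPERATOR = auto()       # needs operator intervention, take the unit out of service


# (vendor, xfs_error_code) -> recovery policy; the values here are invented
RECOVERY_POLICY = {
    ("vendor_a", "WFS_ERR_CDM_CASHUNITERROR"): Recovery.RETRY,
    ("vendor_b", "WFS_ERR_CDM_CASHUNITERROR"): Recovery.OPERATOR,
    ("vendor_a", "WFS_ERR_DEV_NOT_READY"):     Recovery.RESET_DEVICE,
    ("vendor_b", "WFS_ERR_DEV_NOT_READY"):     Recovery.RETRY,
}


def recover(vendor: str, error_code: str) -> Recovery:
    # default to the safest option for combinations you have not tested
    return RECOVERY_POLICY.get((vendor, error_code), Recovery.OPERATOR)
```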

Should you maintain separate version numbers of your web-based interface and APIs?

Suppose you are developing a platform which has a web-based interface for its users and APIs for third-party developers. Something similar to Salesforce (or even Facebook).
Both platforms offer a normal web-based interface for their users and APIs for third-party developers.
Ideally, any API internally calls the same function that the web-based interface uses. For example, the "Create a Project" button and the "CreateProject" API both call the same "createProject()" function internally, so you could maintain the same version for both, since in most cases they are tightly integrated.
Now, sometimes you add a feature that makes you increment the minor version of the web-based interface, but since you are not implementing that feature in the API, the API version should remain as it is.
How do you handle such cases? Should you maintain separate versions of your web-based interface and APIs for your platform?
It depends, because it is difficult to offer a conclusive answer to this question. But I will share some ideas and drill down into a few scenarios to help as best I can.
I would suggest there should be two kinds of APIs: internal APIs and public APIs. From the caller's end they would be two physically distinct APIs/endpoints, so that security policies and a lot of other things can be handled without exposing much information, and without relying on code to enforce the distinction based on who is calling from where, as that can quite easily fail.
It is OK if both kinds of APIs consolidate to some extent further down the line without introducing any security risk, but a separate team of expert engineers should design that consolidation to be seamless yet safe. It's a trade-off between code reuse and everything else. This is very subjective and can turn into an endless discussion, but the software evolves very well as a result of this design process if it is agile and iterative.
The APIs should be externalizable and interoperable. If the project is very large, different teams working on separate parts of the project should interact with each other's work through internal APIs only. No hanky-panky. No direct data access. Only APIs.
This approach will help create awareness across all teams of what's being done in the project, provided the APIs are discoverable, which I personally believe is a very good thing for collaborative teamwork. It also helps with reusability, testing becomes unified and automated, and every team becomes responsible for its own work, which makes accountability easy to address.
There's more stuff that can go in here but I think this is sufficient at a high level.
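As a minimal sketch of the internal/public split described above (Flask is used purely for illustration; routes, names and versions are hypothetical), both surfaces delegate to the same core function but are physically separate apps, so they can be deployed, secured and versioned independently:

```python
# Hypothetical sketch: two physically distinct API surfaces over one core function.
from flask import Flask, jsonify, request


def create_project(name: str, owner_id: int) -> dict:
    """Core business logic shared by the web UI, the internal API and the public API."""
    return {"id": 42, "name": name, "owner_id": owner_id}


internal_api = Flask("internal_api")   # deployed only on the internal network
public_api = Flask("public_api")       # deployed behind the public gateway


@internal_api.route("/internal/v3/projects", methods=["POST"])
def internal_create():
    body = request.get_json()
    # internal callers get the full object, no field filtering
    return jsonify(create_project(body["name"], body["owner_id"]))


@public_api.route("/api/v1/projects", methods=["POST"])
def public_create():
    body = request.get_json()
    project = create_project(body["name"], body["owner_id"])
    # the public contract exposes a deliberately smaller, stable subset
    return jsonify({"id": project["id"], "name": project["name"]})
```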
If allowed, I would also read this question as
"Should you have purely service oriented architecture or not ?"
And my answer would again be: **It depends**, because it is difficult to offer a conclusive answer to this...
Do not publish core functions directly via the API; instead, create all API functions as proxy functions, so that changes in the core functions are handled inside the proxy functions.
Change the public API version only when the API's input/output changes.
This way you achieve code reusability without having to bump the public API version frequently.
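A tiny sketch of that proxy idea (all names are made up): the core function can grow new parameters or fields alongside the web interface, and only the proxy decides whether the public contract, and therefore the public API version, has to change.

```python
# Core function: shared with the web UI; free to evolve with the web interface.
def create_project(name, owner_id, template=None):
    return {"id": 42, "name": name, "owner_id": owner_id, "template": template}


# Public API v1 proxy: its input/output contract is what the API version tracks.
def api_v1_create_project(payload):
    project = create_project(payload["name"], payload["owner_id"])
    # v1 promised exactly these fields; the new 'template' feature in the core
    # (and in the web UI) does not leak out, so no public API version bump.
    return {"id": project["id"], "name": project["name"]}


# Only when the public contract itself changes does a v2 proxy appear.
def api_v2_create_project(payload):
    project = create_project(payload["name"], payload["owner_id"], payload.get("template"))
    return {"id": project["id"], "name": project["name"], "template": project["template"]}
```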
Edit:
If you are talking about the software version number, my answer is yes.

Possible to share information between an add-on to an existing program and a standalone application? [duplicate]

I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices).
The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image possibly).
I've looked at a few methods for doing this but I'm not sure which is "best" for the task at hand. Things I've considered:
Reading and writing to a file (…yes), very basic but not very scalable.
Pure sockets (I have no experience with sockets, but I gather I can use them to send data locally and over a network), though it seems cumbersome if doing everything in Cocoa
Distributed Objects: seems rather inelegant for a task like this
NSConnection: I can't really figure out what this class even does, but I've read of it in some IPC search results
I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.
I am currently looking into the same questions. For me the possibility of adding Windows clients later makes the situation more complicated; in your case the answer seems to be simpler.
About the options you have considered:
Control files: While it is possible to communicate via control files, you have to keep in mind that the files need to be shared via a network file system among the machines involved. So the network file system serves as an abstraction of the actual network infrastructure, but it does not offer the full power and flexibility the network normally has. Implementation: In practice, you will need at least two files for each client/server pair: a file the server uses to send a request to the client(s) and a file for the responses. If each process can communicate both ways, you need to duplicate this. Furthermore, both the client(s) and the server(s) work on a "pull" basis, i.e., they need to revisit the control files frequently and check whether something new has been delivered.
The advantage of this solution is that it minimizes the need to learn new techniques. The big disadvantage is that it places huge demands on the program logic; a lot of things need to be taken care of by you (Will the files be written in one piece, or can it happen that one party picks up an inconsistent file? How frequently should checks be implemented? Do I need to worry about the file system, e.g. caching? Can I add encryption later without toying around with things outside of my program code? ...)
If portability were an issue (which, as far as I understood from your question, it is not), then this solution would be easy to port to different systems and even different programming languages. However, I don't know of any network file system for iPhone OS, though I am not familiar with that platform.
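A bare-bones sketch of what option 1 boils down to (file paths and the JSON format are arbitrary choices): one file per direction, atomic writes so the reader never sees a half-written message, and a polling loop on each side because nothing can push.

```python
# Illustrative only: file-based request/response IPC with polling.
import json
import os
import time

REQUEST_FILE = "/shared/requests.json"    # server -> client  (placeholder path)
RESPONSE_FILE = "/shared/responses.json"  # client -> server  (placeholder path)


def post(path: str, message: dict) -> None:
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(message, f)
    os.replace(tmp, path)        # atomic rename: readers never see a partial file


def poll(path: str, interval: float = 1.0) -> dict:
    while not os.path.exists(path):
        time.sleep(interval)     # the "pull" loop this answer warns about
    with open(path) as f:
        message = json.load(f)
    os.remove(path)
    return message
```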
Sockets: The programming interface is certainly different; depending on your experience with socket programming it may mean that you have more work learning it first and debugging it later. Implementation: Practically, you will need a similar logic as before, i.e., client(s) and server(s) communicating via the network. A definite plus of this approach is that the processes can work on a "push" basis, i.e., they can listen on a socket until a message arrives which is superior to checking control files regularly. Network corruption and inconsistencies are also not your concern. Furthermore, you (may) have more control over the way the connections are established rather than relying on things outside of your program's control (again, this is important if you decide to add encryption later on).
The advantage is that a lot of things are taken off your shoulders that would bother an implementation in 1. The disadvantage is that you still need to change your program logic substantially in order to make sure that you send and receive the correct information (file types etc.).
In my experience, portability (i.e., ease of transitioning to different systems and even programming languages) is very good, since anything even remotely compatible with POSIX works.
[EDIT: In particular, as soon as you communicate binary numbers, endianness becomes an issue and you have to take care of this problem manually - this is a common (!) special case of the "correct information" issue I mentioned above. It will bite you, e.g., when you have a PowerPC talking to an Intel Mac. This special case disappears with solutions 3 and 4, together with all of the other "correct information" issues.]
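For option 2, here is a minimal sketch of both points (host, port and the framing scheme are just placeholders): the server blocks in accept()/recv() instead of polling, and every length field goes over the wire explicitly as big-endian "network order", so a PowerPC and an Intel Mac agree on what they read.

```python
# Minimal framed-message sketch over plain TCP sockets (illustrative only).
import socket
import struct

HOST, PORT = "127.0.0.1", 5050        # placeholders


def recv_exact(sock: socket.socket, n: int) -> bytes:
    """recv() may return fewer bytes than requested, so loop until we have n."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf


def send_message(sock: socket.socket, payload: bytes) -> None:
    # 4-byte length prefix, explicitly big-endian (network byte order)
    sock.sendall(struct.pack(">I", len(payload)) + payload)


def recv_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)


def serve_once() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()    # blocks ("push") until a client connects
        with conn:
            print(recv_message(conn).decode("utf-8"))
            send_message(conn, b"ack")
```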
3. + 4. Distributed objects: The NSProxy class cluster is used to implement distributed objects. NSConnection is responsible for setting up remote connections as a prerequisite for sending information around, so once you understand how to use this system, you also understand distributed objects. ;^)
The idea is that your high-level program logic does not need to be changed (i.e., your objects communicate via messages and receive results, and the messages together with the return types are identical to what you are used to from your local implementation), without your having to bother about the particulars of the network infrastructure. Well, at least in theory. Implementation: I am also working on this right now, so my understanding is still limited. As far as I understand, you do need to set up a certain structure, i.e., you still have to decide which processes (local and/or remote) can receive which messages; this is what NSConnection does. At this point, you implicitly define a client/server architecture, but you do not need to worry about the problems mentioned in 2.
There is an introduction with two explicit examples at the Gnustep project server; it illustrates how the technology works and is a good starting point for experimenting:
http://www.gnustep.org/resources/documentation/Developer/Base/ProgrammingManual/manual_7.html
Unfortunately, the disadvantages are a total loss of compatibility with other systems (although you will still do fine with the Mac and iPhone/iPad-only setup you mentioned) and a loss of portability to other languages. GNUstep with Objective-C is at best code-compatible, but there is no way to communicate between GNUstep and Cocoa; see my edit to question number 2 here: CORBA on Mac OS X (Cocoa)
[EDIT: I just came across another piece of information that I was unaware of. While I have checked that NSProxy is available on the iPhone, I did not check whether the other parts of the distributed objects mechanism are. According to this link: http://www.cocoabuilder.com/archive/cocoa/224358-big-picture-relationships-between-nsconnection-nsinputstream-nsoutputstream-etc.html (search the page for the phrase "iPhone OS") they are not. This would exclude this solution if you demand to use iPhone/iPad at this moment.]
So to conclude, there is a trade-off between the effort of learning (and implementing and debugging) new technologies on the one hand, and hand-coding lower-level communication logic on the other. While the distributed-objects approach takes most of the load off your shoulders and incurs the smallest changes in program logic, it is the hardest to learn and also (unfortunately) the least portable.
Disclaimer: Distributed Objects are not available on iPhone.
Why do you find distributed objects inelegant? They sound like a good match here:
transparent marshalling of fundamental types and Objective-C classes
it doesn't really matter whether clients are local or remote
not much additional work for Cocoa-based applications
The documentation might make it sound like more work than it actually is, but basically all you have to do is use protocols cleanly and export, or respectively connect to, the server's root object.
The rest should happen automagically behind the scenes for you in the given scenario.
We are using ThoMoNetworking and it works fine and is fast to set up. Basically, it allows you to send NSCoding-compliant objects over the local network, but of course it also works if client and server are on the same machine. As a wrapper around the Foundation classes it takes care of pairing, reconnections, etc.

Integrating my RESTful web app with clients' SAP installations

My company runs a couple of B2B apps (written in Rails) dealing with parts and inventory, and we've been trying to figure out the best way to integrate with some of our bigger users. We already offer the REST-style API that comes with Rails, but that, of course, requires an IT department on their end to decide to integrate it, so we'd like to lower that barrier if possible.
From what we've found, most of them are on SAP systems. Now, pretty much all I know about SAP is it's 1) expensive, 2) huge, 3) and does everything and anything you could ever need for your gigantic business to run. Naturally, this is all a bit imposing, and the resources on the site are a cross between impenetrable buzz-word laden sales material, and impenetrable jargon laden advanced technical material with little for the new, but technically competent user to be able to sink his teeth into.
So what I'm wondering is: as a 3rd party, that's not running a SAP installation, is there a way for us to offer access to our site's data through a web service or other API? Is it just a matter of providing or implementing a certain WSDL (and what would that be)? Is this feasible for someone without in-depth experience with SAP? Or is this a complete non-starter?
I'd say it's not possible without someone who knows the SAP system. You probably won't need to hire someone with in-depth SAP knowledge, but at least for the initial implementation you'll need both the knowledge and a working system you can develop against. Technically speaking, it's not really that hard, but considering that SAP systems are designed to handle multiple organizations, countries, legal systems, localizations and several thousand users simultaneously, things are bound to be a bit more complex than in almost any other software around - and most of the time it's not even bloat; it's just easy to get lost in that kind of flexibility.
My recommendation would be to find a customer (or prospective customer) who has someone in their IT department with the necessary technical and process knowledge and who is interested in running a development project with you. This way, you'd get access to a real system (a test system, of course) and someone who can explain the basics of the system to you. But, as I said, be prepared for complexity.
vwegert makes some excellent points.
As to this part of your question:
So what I'm wondering is: as a 3rd party, that's not running a SAP installation, is there a way for us to offer access to our site's data through a web service or other API? Is it just a matter of providing or implementing a certain WSDL (and what would that be)?
Technically it is possible to expose any of your system's services as web services to a client's SAP system, and you do not need any prior knowledge of SAP to do so. (SAP should be able to import a WSDL, although there may be some limitations in the earlier pre-ECC5 systems.)
For example, a service that provides meter readings, airport departure schedules, industry trends, etc. does not depend on what is in the user's system or how they have set it up. However, as soon as there is a need to initiate updates to the client system's data, you need access to more specialised SAP knowledge.
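To make the WSDL part concrete, here is a minimal sketch of exposing one read-only service with the Python spyne library (service name, namespace and fields are invented); it should publish a WSDL at the endpoint's ?wsdl URL, which the customer's SAP team could then import on their side. A Rails shop would of course use a Ruby SOAP toolkit instead; the overall shape is the same.

```python
# Illustrative only: a tiny SOAP service whose auto-generated WSDL a
# customer-side SAP system could import and call.
from wsgiref.simple_server import make_server

from spyne import Application, Integer, ServiceBase, Unicode, rpc
from spyne.protocol.soap import Soap11
from spyne.server.wsgi import WsgiApplication


class PartInventoryService(ServiceBase):
    @rpc(Unicode, _returns=Integer)
    def get_stock_level(ctx, part_number):
        # placeholder lookup; the real implementation would query your app's data
        return 17


application = Application(
    [PartInventoryService],
    tns="example.partsapp.soap",          # hypothetical target namespace
    in_protocol=Soap11(validator="lxml"),
    out_protocol=Soap11(),
)

if __name__ == "__main__":
    server = make_server("0.0.0.0", 8000, WsgiApplication(application))
    server.serve_forever()
```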
Also note that many SAP functions can also be exposed as web services, but generally you do need someone with SAP (ABAP) knowledge to do this.
The ABAP language is actually fairly simple, but there is a huge learning curve to understand the data model and the myriad of configurable options in SAP.

What research-operating-system features would you advocate including in Google Chrome Operating System

Imagine that a large player is undertaking the construction of a new operating system, where backward compatibility requirements are limited to:
Run existing applications written in (or compiled to) JavaScript which are presented in HTML5 and styled with CSS3
Plug and play support for printers, external storage, and optical drives
Degrade gracefully when disconnected from the internet
Sufficient process quotas to support safely permitting tasks to run in the background, including timers
What specific features from existing research operating systems (such as Plan 9) would you like to see enter the mainstream through this channel? Please limit your suggestions to things that have been implemented, and provide a link to the implementation (or at least search terms).
From the Plan 9 docs:
Plan 9 began in the late 1980’s as an attempt to have it both ways: to build a system that was centrally administered and cost-effective using cheap modern microcomputers as its computing elements.
Netbooks qualify as cheap modern microcomputers, and The Cloud qualifies as centrally administered. There is an opportunity to implement the features (in DDaviesBrackett's words) that we want netbooks to have other than by extending a 1970's time-sharing OS; the research operating systems may have proved the value of alternatives by example.
From the Plan 9 FAQ:
Subject: What are its key ideas?
Plan 9 exploits, as far as possible, three basic technical ideas: first, all the system objects present themselves as named files that are manipulated by read/write operations; second, all these files may exist either locally or remotely, and respond to a standard protocol; third, the file system name space - the set of objects visible to a program - is dynamically and individually adjustable for each of the programs running on a particular machine. The first two of these ideas were foreshadowed in Unix and to a lesser extent in other systems, while the third is new: it allows a new engineering solution to the problems of distributed computing and graphics.
Plan 9's approach means that application programs don't need to know where they are running; where, and on what kind of machine, to run a Plan 9 program is an economic decision that doesn't affect the construction of the application itself.
Does that not appear to be an excellent fit for the netbook/Cloud domain?
What operating system features would I advocate for Chrome OS?
Here is my wish list, as a Plan 9/Inferno fan:
Resources (IP stack, graphics, etc.) as file systems (see the sketch below).
Network-transparent file system (i.e., 9P).
Private per-process namespaces.
Factotum-like auth system (i.e., no root user).
Pure UTF-8 everywhere.
Extremely lightweight processes.
Automatic snapshots and de-duplicating storage (à la Venti+Fossil).
And I guess many others, but this would be enough to make me quite happy.
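To give a flavour of the first item ("resources as file systems"), here is a small illustrative sketch of dialing a TCP connection on Plan 9 purely through file operations on /net, with no socket API in sight. It assumes you are running inside a namespace where Plan 9's /net is actually mounted (it will not work on a stock Linux/macOS box), and it skips name resolution, which would normally go through /net/cs.

```python
# Sketch only: Plan 9 "everything is a file" networking. Opening /net/tcp/clone
# allocates a new connection; the open fd acts as its ctl file, and the data
# file under /net/tcp/<n>/ carries the byte stream.

def dial_tcp(ip: str, port: int):
    ctl = open("/net/tcp/clone", "r+b", buffering=0)
    conn = ctl.read(32).decode().strip()              # e.g. "7" -> /net/tcp/7/
    ctl.write(f"connect {ip}!{port}\n".encode())      # dial by writing a control message
    data = open(f"/net/tcp/{conn}/data", "r+b", buffering=0)
    return ctl, data                                  # keep both open; closing them tears the connection down

# Usage (hypothetical address):
# ctl, data = dial_tcp("10.0.0.1", 80)
# data.write(b"GET / HTTP/1.0\r\n\r\n")
# print(data.read(4096))
```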
This is not an 'OS feature' per se, but I would love to have a GUI with mouse-chording.
None.
I'd prefer for a new consumer OS, especially one targeted at Netbooks, to be very very good at doing the things that we already want OSes to be able to do rather than having time spent on features that are, by their nature, experimental.
(Of course, I'd be totally un-bothered by features I wasn't forced to use to develop on the platform; other people's toys are welcome as long as they don't make my job harder.)
I really think that Google might look into Plan9 for inspiration actually. Hearsay (the Internet) claims that several of those that initially developed UNIX and then later scrapped it for a better design (Plan9) are employed by Google. Google is also hosting its own version of Inferno, but I am not sure whether this is any central part of their plan. Further "evidence" could be that the plan9 authorization system (p9auth) for Linux was published by a Google researcher. The third "evidence" would be that Google claim that Chrome OS will have a novel security architecture.
The authorization seems to me to be one of the GREATEST parts of the Plan9 that can be included right now (/net would also be nice but there is no working code for that yet). The idea that a program that needs root access only gets limited access to the parts that are determined by the authorization server is definitely a great step forward compared to the now prevalent user/superuser/root division in Linux, where "a man in the middle" attacks can (theoretically) be done by gaining (full, as opposed to limited by the authorization server) root access via a bug in a program granted root.