Step by step: developing an AUTOSAR software component

I'm new to AUTOSAR and already have a general understanding of the AUTOSAR architecture. I have read AUTOSAR_TR_Methodology.pdf as my starting point for developing AUTOSAR software components (SWCs). For additional context: I will receive a "system extract" from the main organization and add my SWCs to it. That document describes the tasks needed to develop an SWC one by one, as a whole process, but not in sequence. So my question is: after I receive the system extract, which tasks are required to create SWCs? It would be great if the tools were mentioned as well.

The system extract usually contains software-components, albeit usually in the form of so-called compositions (in AUTOSAR lingo: CompositionSwComponentType). These compositions come with defined PortPrototypes, which in turn are typed by PortInterfaces.
The task of the designer of an application software-component (technically speaking: an ApplicationSwComponentType) is to connect to the PortPrototypes defined on the composition level and then specify the internal behavior (SwcInternalBehavior) that formally defines the internal structure of the software-component. On this basis, the function of the software-component can be implemented.
A software-component itself consists of the formal specification (serialized in the ARXML format) and the corresponding C code that implements the actual function of the software-component.
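To make that concrete, here is a minimal sketch of what the C side of a software-component might look like. All component, port, and data element names are invented for illustration; in a real project the Rte_Read/Rte_Write access functions and the application header are generated by the RTE generator from the ARXML description:

```c
/* Sketch of a runnable for a hypothetical ApplicationSwComponentType
 * named "SpeedMonitor". Rte_SpeedMonitor.h and the port access
 * functions are emitted by the RTE generator from the ARXML. */
#include "Rte_SpeedMonitor.h"

void SpeedMonitor_MainFunction(void) /* runnable entity, e.g. triggered cyclically */
{
    uint16 speed = 0u;

    /* read from a require-port that is connected on the composition level */
    if (Rte_Read_SpeedIn_Value(&speed) == RTE_E_OK)
    {
        boolean warn = (boolean)(speed > 130u);

        /* write the result to a provide-port */
        (void)Rte_Write_WarnOut_Active(warn);
    }
}
```

The SwcInternalBehavior in the ARXML declares this runnable, its triggering event, and its port accesses; that is what allows the RTE generator to emit a matching Rte_SpeedMonitor.h.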
There are tons of tools out there to develop AUTOSAR software-components. Most of these are commercial, and require a license. On top of that, the toolchain to be applied for a given project is in many cases predefined and you may not be able to select your tools freely.
If you seriously want to dive into AUTOSAR, I'd strongly advise taking a class offered by one of the various tool vendors, preferably a class held by the tool vendor selected for your actual ECU project.


What is the difference in meaning between "source" and "code" in programming?

Excerpt From: Robert C. Martin. “Clean Architecture: A Craftsman's Guide to Software Structure and Design (Robert C. Martin Series).”
“Now, what do we mean by the word “module”? The simplest definition is just a source file. Most of the time that definition works fine. Some languages and development environments, though, don’t use source files to contain their code. In those cases a module is just a cohesive set of functions and data structures.”
I got confused about "source" and "code" here. What does he mean when he writes "don't use source files to contain their code"?
Thanks for your explanations.
"code" can be any statement or expression. "Source code" and "code" can be used interchangeably.
A file-system file (e.g. helloworld.c) can contain any number of statements. However, some development environments do not use files to store their code. Conceptually, the code could live on the web and be stored in a database, or it could exist in memory only, and so not in a file.
In this particular passage, Mr. Martin (also known as Uncle Bob) is trying to illustrate that a module can mean different things depending on the language or development environment. It is better to concentrate on the last line you quoted, "a module is just a cohesive set of functions and data structures", as this is a sufficient definition without getting confused about the storage location of the code.
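To give one concrete case: in C, a module is conventionally a source file together with its header, where the header is the public interface and any static data and helper functions remain hidden inside the file. A minimal sketch (file and function names invented):

```c
/* counter.h -- the module's public interface */
#ifndef COUNTER_H
#define COUNTER_H
void counter_increment(void);
int  counter_value(void);
#endif

/* counter.c -- the module itself: a cohesive set of
 * functions and data structures */
#include "counter.h"

static int count = 0;  /* hidden state, invisible to other modules */

void counter_increment(void) { count++; }
int  counter_value(void)     { return count; }
```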

In what way does GObject facilitate binding?

On the official website of gobject, we can read:
GObject, and its lower-level type system, GType, are used by GTK+ and most GNOME libraries to provide:
- object-oriented C-based APIs, and
- automatic transparent API bindings to other compiled or interpreted languages
The first part seems clear to me but not the second one.
Indeed, when talking about GObject and binding, the concept introduced is often gobject-introspection, but as far as I understand, gobject-introspection can be used to create .gir and .typelib files for any documented C library, not only for GObject-based libraries.
Therefore I wonder what makes gobject particularly binding-friendly.
as far as I understand, gobject-introspection can be used to create .gir and .typelib files for any documented C library, not only for GObject-based libraries.
That's not really true in practice. You can do some very basic stuff, but you have to write the GIR by hand (instead of just running a program which scans the source code). The only hand-written ones I'm aware of are those distributed with gobject-introspection (the *.gir files; the *.c files there exist to avoid cyclical dependencies), and even those generally cover only a fairly small subset of the C API.
As for other features, almost everything in GObject helps… the basic idea is that bindings often need RTTI. There are types like GValue (a simple box to store a value + type information), GClosure (for callbacks), properties and signals describe themselves with GTypes, etc. If you use GObjects (instead of creating a new fundamental type) you get run-time data about inheritance and interfaces, and GObject's odd construction scheme even allows other languages to subclass types declared in C.
The reason g-ir-scanner can't really do much on non-GObject libraries is that all that information is missing. After scanning the source code looking for annotations, g-ir-scanner will actually load the compiled module and use GObject's API to grab this information (which makes cross-compiling painful). In other words, GObject-Introspection is a much smaller project than you think… a huge percentage of the data it needs it gets from the GObject API.
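To make the RTTI point concrete, here is a small sketch that uses the GObject API directly. A GValue carries its GType at run time, and the type system can answer inheritance queries; this is exactly the kind of information a binding needs and a plain C library lacks:

```c
/* Compile with: gcc demo.c $(pkg-config --cflags --libs gobject-2.0) */
#include <glib-object.h>

int main(void)
{
    GValue v = G_VALUE_INIT;

    /* A GValue stores its GType alongside the payload, so a binding
     * can inspect values at run time without parsing C headers. */
    g_value_init(&v, G_TYPE_INT);
    g_value_set_int(&v, 42);
    g_print("type: %s, value: %d\n",
            G_VALUE_TYPE_NAME(&v), g_value_get_int(&v));

    /* Inheritance is queryable at run time, too. */
    g_print("GInitiallyUnowned is a GObject: %s\n",
            g_type_is_a(G_TYPE_INITIALLY_UNOWNED, G_TYPE_OBJECT)
                ? "yes" : "no");

    g_value_unset(&v);
    return 0;
}
```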

IBM Rational DOORS: Use case extensions?

I'm investigating how to implement use case extension points in DOORS requirements. To this end, I wanted to know if a DOORS object in one module can reference or link to a DOORS object in a second module. If so, I figure I can have my use cases with the extension points in a high level module, then I can have extension variations in a separate DOORS module, with each one referencing the DOORS object with the extension point it is instantiating. Any thoughts on that?
You can definitely link an object in one DOORS module to an object in another. That's one of the main features of DOORS: it lets you trace your requirements between modules. For example, system-level statement-of-work documents may be your initial source of requirements, kept in a high-level module; these can then be linked bidirectionally to finer-grained software requirements, which in turn can be linked to software components, lines of code, or test cases.
If you right-click a requirement you should see an option called "Link", I believe. I think you need to have both modules open; at the least, it makes things easier.
You can only link from one object to another. The objects can be in different modules.
Sometimes we wanted to link to a whole module. We created a first "title" object in the module we wanted to link to, and we linked to that.
DOORS is really dismal. Glad we ditched it.

How do visual programming languages work?

I'm exploring the possibility of presenting a visual interface to users of an application that would allow them to enter some custom code/functionality in a dataflow-ish style (a la Yahoo Pipes).
I'm wondering how, in Pipes for example, the visual editor could work. Could the visual code be compiled into a textual language which is then stored in a database? Or could the individual blocks, connectors, variables, etc. all be stored in a database?
What about visual programming language IDEs like Microsoft's Visual Studio? Is the code interpreted straight from the visual interface?
What you see on the screen is the tip of the iceberg. Components are programs of varying size and complexity, with public interfaces. In dataflow programming, these interfaces are producers and consumers (outputs and inputs), so a component can be visualised as a black box with pins on its input and output sides. When you connect pins (ports), you route one program's output to another program's input. The components are pre-compiled for you; they are ready to run, and you have just set their consumers (inputs) and producers (outputs) by connecting them. That's why they are black boxes: they're programs which you can't change (unless you have the source code).
The components are designed to be connected to others. In some cases, components can run stand-alone, but usually they have to be connected to do the complete work. Basically, there are three kinds of components:
- source: generates output (which requires further processing or display),
- process: receives input, processes it, then passes it on for further processing or display,
- sink: receives input, displays or saves it, and doesn't pass it on to anyone.
A typical complete dataflow construction contains a source-process-process-sink chain, where the number of process-type components can even be zero (the data generated by the source is displayed by a sink component). You can think of these three components as if they had once been one program that has been broken apart, and you can now re-assemble them.
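Here is a toy sketch of that model in C (all names invented): each component is a function with an input and/or output "pin", and the visual editor's only real job is to decide how they are wired together:

```c
#include <stdio.h>
#include <stdbool.h>

/* source: generates output */
static bool source_numbers(int *out)
{
    static int n = 0;
    if (n >= 5) return false;   /* end of stream */
    *out = n++;
    return true;
}

/* process: receives input, transforms it, passes it on */
static int process_square(int in) { return in * in; }

/* sink: receives input and displays it */
static void sink_print(int in) { printf("%d\n", in); }

int main(void)
{
    int v;
    /* the "wiring": source -> process -> sink */
    while (source_numbers(&v))
        sink_print(process_square(v));
    return 0;
}
```

Swapping process_square for another process component, or sink_print for one that writes to a file, rewires the chain without touching the other black boxes; that rewiring is all the visual editor really records.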
One of the best-known dataflow systems is the Unix shell. The CLI commands are the components. They're precompiled; you just define a chain by putting "|" between them. Also, most "source" commands can be used stand-alone, like ls, and most "sink" commands can receive input from a file given as an argument, e.g. more.

Best approach to perform a CMMI Physical Configuration Audit?

I currently work for an organization that is moving into the whole CMMI world of documenting everything. I was assigned (along with one other individual) the title of Configuration Manager. Congratulations to me, right?
Part of the duties is to perform, on a regular basis (they are still defining "regular basis"; it will be either quarterly or monthly), a physical configuration audit. This is basically a check of the source code versions deployed in production against what we believe to be the source code versions in production.
Our project is a relatively small web application written in Java. The file types we work with are Java, JSP, XML, property files, and SQL packages.
The problem I have (and have expressed, though it seems to be ignored) is this: how am I supposed to physically log on to the production server and verify file versions? And even if I could, it would take a ridiculous amount of time.
The file versions are not even currently in the files (e.g. in a comment or something). It was suggested that we also place visible version numbers on each screen shown to users. I thought this was ridiculous too, since the screens themselves represent only a small fraction of the code we maintain.
The tools we currently use are NetBeans for our IDE and Serena Dimensions as our versioning tool.
I am specifically looking for ideas on how to perform this audit in a hopefully more automated way, one that is both accurate and not time-consuming.
My current idea is to add a comment to the top of each file containing that file's version number, plus a script that runs when a production build is created to produce an XML file (or something similar) containing the file name and version of each file in the build. Then, when I need to do an audit, I go to the production server, grab the XML file with the info, compare it programmatically to what we believe to be in production, and output a report.
Any better ideas? I know this has to have been done already, and it seems crazy to me that I haven't found any other resources.
You could compute a SHA1 hash of the source files on the production server, and compare that hash value to the versions stored in source control. If you can find the same hash in source control, then you know what version is in production. If you can't find the same hash in source control, then there are untracked modifications in production and your new job title is justified. :)
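As a sketch of that idea in C, using OpenSSL's SHA-1 routines (an assumption on my part; a sha1sum one-liner or your CM tool's own hashing would serve equally well), the following prints sha1sum-style output that can be diffed against a manifest generated from the versions held in source control:

```c
/* Compile with: gcc hashfile.c -lcrypto
 * Prints the SHA-1 of each file given on the command line in the same
 * format as sha1sum, so the output can be compared against a manifest
 * built from the versions in source control. */
#include <stdio.h>
#include <openssl/sha.h>

int main(int argc, char **argv)
{
    unsigned char buf[4096], digest[SHA_DIGEST_LENGTH];

    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "rb");
        if (!f) { perror(argv[i]); continue; }

        SHA_CTX ctx;
        SHA1_Init(&ctx);
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            SHA1_Update(&ctx, buf, n);
        SHA1_Final(digest, &ctx);
        fclose(f);

        for (int j = 0; j < SHA_DIGEST_LENGTH; j++)
            printf("%02x", digest[j]);
        printf("  %s\n", argv[i]);
    }
    return 0;
}
```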
The typical trap organizations fall into with the CMMI is trying to overdo everything. If I could suggest anything, it'd be to start small and only do what you need. So consider any problems that you may have had in the CM area previously.
The CMMI describes WHAT an organisation should do, but leaves the HOW up to you. Chapter 2 of the CMMI specification is well worth a read: it describes the required, expected, and informative components of the specification. Basically, the goals are required, the practices are expected, and everything else is informative. This means there is only a small part of the specification which a CMMI appraiser can directly demand: the goals. At the practice level, it is permissible to have either the practices as described or acceptable alternatives to them.
In the case of configuration audits, goal SG3 is "Integrity of baselines is established and maintained". SP3.2 says "Perform configuration audits to maintain integrity of the configuration baselines." There is nothing stated here about how often these are done, or how long they may take.
In my previous organisation, FCA/PCA was usually only done as part of the product release process. We used ClearCase as the versioning tool, with labels applied across the codebase to define baselines. We didn't have version numbers in all the source files, nor did we have version numbers on all the product's screens; the CM activity was doing the right thing and was backed up by audits, and this was never an issue in any CMMI appraisal.
We could use the deltas between labels to see which files had changed, and perform diffs to see the actual code changes. An important part of the process is being able to link those changes back to a requirement, bug report, or whatever the reason was that initiated the change.
Our auditing did use scripts to automate the process, but these were in-house developed scripts specific to ClearCase: basically, they would list all the files, their versions in the CM system, and the baseline/config item to which they belonged.
Can't you use your source control for this? If you deploy a version and tag your source control with that deployment, you can then verify against the source control system.