How do visual programming languages work?

I'm exploring the possibility of presenting a visual interface to users of an application that would allow them to enter some custom code/functionality in a dataflow-ish style (a la Yahoo Pipes).
I'm wondering how, in Pipes for example, their visual editor could work. Could the visual code be compiled into a textual language which is then stored in a database? Or could the individual blocks, connectors, variables, etc. all be stored in a database?
What about visual programming language IDEs like Microsoft's Visual Studio? Is the code interpreted straight from the visual interface?

What you see on the screen is the tip of the iceberg. Components are programs of varying size and complexity with public interfaces. In dataflow programming, these interfaces are producers and consumers (outputs and inputs), so a component can be visualised as a black box with pins on its input and output sides. When you connect pins (ports), you feed one program's output into another program's input. The components are pre-compiled and ready to run; you have just set their consumers (inputs) and producers (outputs) by connecting them. That's why they are black boxes: they're programs you can't change (unless you have the source code).
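Whatever Pipes does internally, the practical upshot is that the picture itself is just data: records describing which black boxes exist and how their pins are wired, while only the boxes themselves are code. A minimal sketch in C of such a representation, which could just as well live in database rows (every block type and name below is invented for illustration):

    #include <stdio.h>

    /* A "visual program" reduced to plain records: blocks and the wires
     * between their pins. The behaviour of each block lives elsewhere,
     * in precompiled components. */
    typedef struct {
        int         id;
        const char *type;     /* which precompiled component to run */
        const char *config;   /* block-specific settings from the editor */
    } Block;

    typedef struct {
        int src_block, src_port;   /* producer pin */
        int dst_block, dst_port;   /* consumer pin */
    } Connection;

    int main(void) {
        /* three blocks wired in a chain: a source, a process and a sink */
        Block blocks[] = {
            { 1, "fetch_feed", "url=http://example.com/feed" },
            { 2, "filter",     "contains=weather" },
            { 3, "output",     "" },
        };
        Connection wires[] = {
            { 1, 0, 2, 0 },   /* fetch_feed.out[0] -> filter.in[0] */
            { 2, 0, 3, 0 },   /* filter.out[0]     -> output.in[0] */
        };

        /* A runtime would walk this graph and pump data along the wires;
         * here we only print it, to show that the visual program is
         * nothing more than these records. */
        for (size_t i = 0; i < sizeof blocks / sizeof blocks[0]; i++)
            printf("block %d: %s (%s)\n", blocks[i].id, blocks[i].type, blocks[i].config);
        for (size_t i = 0; i < sizeof wires / sizeof wires[0]; i++)
            printf("block %d pin %d -> block %d pin %d\n",
                   wires[i].src_block, wires[i].src_port,
                   wires[i].dst_block, wires[i].dst_port);
        return 0;
    }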
The components are designed to be connected to others. In some cases a component can run stand-alone, but usually they have to be connected to do the complete work. Basically, there are three kinds of components:
- source: generates output (which requires further processing or display),
- process: receives input, processes it, then passes it on for further processing or display,
- sink: receives input, displays or saves it, and doesn't pass it on to anyone.
A typical complete dataflow construction contains a source-process-process-sink chain, where the number of process-type components can even be zero (the data generated by the source is displayed by a sink component). You can think of these three kinds of components as if they were once a single program that has been broken apart, and you can now re-assemble them.
One of the best-known dataflow systems is the Unix shell. The CLI commands are the components. They're precompiled; you just define a chain by putting "|" between them. Also, most "source" commands can be used stand-alone, like ls, and most "sink" commands can also take their input from a file given as an argument, e.g. more.
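To make the shell analogy concrete, here is a tiny "process" component written in C: it consumes bytes on stdin and produces them upper-cased on stdout, so once compiled it can be dropped into a chain like ls | ./upcase | more (upcase is just an example filter, not a standard tool):

    /* upcase.c - a minimal "process" component: consumes bytes on stdin,
     * produces upper-cased bytes on stdout. ls plays the source role and
     * more the sink role in: ls | ./upcase | more */
    #include <ctype.h>
    #include <stdio.h>

    int main(void) {
        int c;
        while ((c = getchar()) != EOF)
            putchar(toupper(c));
        return 0;
    }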

Related

Difference between template and yaml file in DevOps

I just started reading about DevOps on the Microsoft site and came across these 2 files. I'm confused about why the process needs 2 files, with the YAML file using variables and the template using parameters... At the moment, I'm still unclear on why they have to separate variables and parameters.
They are talking about ARM Templates. While you can have Parent/Child Templates, that's more complex, and I've never seen an example that splits Variables and Parameters into different files.
To clear up the confusion, ARM Templates have 5 main parts:
- Variables: things you know at design time, like a VNet address.
- Parameters: things you only know at runtime and don't want stored in the template file itself, such as usernames and passwords.
- Functions: things you compute in your template.
- Resources: the things you are provisioning.
- Outputs: things you can reuse, such as the URL that results from provisioning an Azure Web App.

Step by step developing AUTOSAR's Software Component

I'm new to AUTOSAR and already understand the AUTOSAR architecture at a summary level. I have read AUTOSAR_TR_Methodology.pdf as my starting point for developing AUTOSAR software components (SWCs). As additional information: I will get the "system extract" from the main organization and add my SWCs to it. In that document, the tasks needed to develop SWCs are described one by one as a whole process, but not in sequence. So my question is: after I have the system extract, which tasks are required to create the SWCs? It would be great if the tools were mentioned as well.
The system extract usually contains software-components, albeit usually in the form of so-called compositions (in AUTOSAR lingo: CompositionSwComponentType). These compositions come with defined PortPrototypes, which in turn are typed by PortInterfaces.
The task of the designer of an application software-component (technically speaking: an ApplicationSwComponentType) is to connect to the PortPrototypes defined on the composition level and then specify the internal behavior (SwcInternalBehavior) that formally defines the internal structure of the software-component. On this basis the function of the software-component can be implemented.
A software-component itself consists of the formal specification (serialized in the ARXML format) and the corresponding C code that implements the actual function of the software-component.
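As a rough sketch of what that C code looks like: the RTE generator turns the ARXML description into an application header plus Rte_Read/Rte_Write functions that the SWC's runnables call. The component, port, and data element names below are invented; the real identifiers are generated from your ARXML by your RTE tool.

    /* Sketch of the implementation side of an application SWC.
     * "MyApplication", "InSpeed", "OutTorque" and "Value" are invented
     * names; the real Rte_* identifiers are generated from the ARXML. */
    #include "Rte_MyApplication.h"   /* generated application header */

    /* Runnable entity, declared in the SwcInternalBehavior and typically
     * triggered by a timing event or a data-received event. */
    void MyApplication_MainRunnable(void)
    {
        uint16 speed = 0u;
        uint16 torque;

        (void)Rte_Read_InSpeed_Value(&speed);     /* require-port */

        torque = (uint16)(speed * 2u);            /* the component's actual function */

        (void)Rte_Write_OutTorque_Value(torque);  /* provide-port */
    }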
There are tons of tools out there to develop AUTOSAR software-components. Most of these are commercial, and require a license. On top of that, the toolchain to be applied for a given project is in many cases predefined and you may not be able to select your tools freely.
If you seriously want to dive into AUTOSAR I'd strongly advise taking a class offered by the various tool vendors, preferably a class held by the tool vendor selected for a given actual ECU project.

Why Does a .dll file allow programs to be modularized?

Excerpt from Microsoft's "What is a .dll?":
"By using a DLL, a program can be modularized into separate
components. For example, an accounting program may be sold by module.
Each module can be loaded into the main program at run time if that
module is installed. Because the modules are separate, the load time
of the program is faster, and a module is only loaded when that
functionality is requested. Additionally, updates are easier to apply
to each module without affecting other parts of the program. For
example, you may have a payroll program, and the tax rates change each
year. When these changes are isolated to a DLL, you can apply an
update without needing to build or install the whole program again."
Ref: http://support.microsoft.com/kb/815065
DLLs are:
- loaded at runtime
- able to be "dynamically loaded" (by multiple programs at the same time)
  - which allows resources to be saved
  - and lowers disk space requirements
But why do they promote "modularizing" programs? What would happen if there weren't .dll files? Could someone provide/expand on the example?
Modular programs provide a way of making a particular functionality available to many programs without having to include the same code in all of them. Also, they allow greater compatibility between programs since they would essentially use the same methods in common DLLs to obtain the same results.
One would write a program in a modular fashion such that different parts of the program could be maintained separately. Say you had some clever way of reading and writing your own data format to files. Say you make improvements to that technique. If the code for reading and writing the files lived in a DLL, you would only need to update the DLL. The program itself would remain unchanged.
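As a sketch of that idea (the DLL name fileformat.dll and the ReadRecord function are made up): the EXE links against the DLL's import library once, and from then on the fixed exported interface is all that ties them together, so an improved fileformat.dll can be dropped in without rebuilding the EXE.

    /* fileformat.h - shared by the DLL and the EXE (all names invented) */
    #ifdef BUILDING_FILEFORMAT_DLL
    #  define FILEFORMAT_API __declspec(dllexport)
    #else
    #  define FILEFORMAT_API __declspec(dllimport)
    #endif

    FILEFORMAT_API int ReadRecord(const char *path, char *buffer, int size);

    /* fileformat.c - becomes fileformat.dll, e.g. with
     *   cl /LD /DBUILDING_FILEFORMAT_DLL fileformat.c
     * Improving this implementation and shipping a new DLL updates every
     * program that uses it, without relinking those programs. */
    #include <stdio.h>

    FILEFORMAT_API int ReadRecord(const char *path, char *buffer, int size)
    {
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            return -1;
        int n = (int)fread(buffer, 1, (size_t)size, f);
        fclose(f);
        return n;
    }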
If you have one monolithic EXE, you have to:
- pay for all the extra time relinking it, even if only 1 source file changed (this is painful if it's > 80 MB, as is the case in large projects),
- ship the entire EXE for patches/updates, when you could ship just a single DLL that is a fraction of the size.
Breaking it up into DLLs, you:
- have pluggability: the EXE is the host application and others can write DLLs that "plug into" the host via a well-defined interface (see the sketch after this list). DLLs can be interchanged as long as they conform to the interface.
- can share code across other DLLs and EXEs.
- can have some DLLs be optionally loaded on demand, only if they're used, and unloaded when they're no longer needed.
- similar to the above, can have optional functionality. With a single EXE you have to download everything, even if some components are rarely used. With DLLs, you could have a system that downloads and installs features as needed.
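Here is the sketch referred to in the pluggability point: the host side of a plug-in scheme using the Win32 calls LoadLibrary and GetProcAddress. The module name reporting.dll and the PluginInit entry point are an assumed convention for this example, not part of any Windows API.

    #include <windows.h>
    #include <stdio.h>

    typedef int (*PluginInitFn)(void);

    int main(void)
    {
        /* Optional module: if it isn't installed, the host just carries on. */
        HMODULE plugin = LoadLibraryA("reporting.dll");
        if (plugin == NULL) {
            printf("feature not installed, continuing without it\n");
            return 0;
        }

        /* Resolve the agreed-upon entry point and let the plug-in run. */
        PluginInitFn init = (PluginInitFn)GetProcAddress(plugin, "PluginInit");
        if (init != NULL)
            init();

        FreeLibrary(plugin);   /* unload when no longer needed */
        return 0;
    }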
The biggest advantage of dlls is probably during development of the original program. Without dlls you wouldn't be able to integrate with existing libraries without including the original source code. By including an existing library as a dll you don't need the source since it's all encapsulated in the dll. It would be a nightmare to develop in frameworks like .Net without dlls since you constantly include other libraries...
The alternative to breaking your program down into n > 1 pieces is to keep it in n == 1 piece. Why is this bad? Well, it isn't always bad (maybe the BIOS is a good example?). But for user programs it usually is. Why? First we need to define what a program is.
What is a program?
A simple "program", roughly speaking, consists of an entry point (i.e. offset to the main function), functions and global variables. A function consists of instructions and information about what local variables are needed to run the function. To be executed a program must be loaded in primary memory/RAM (the aforementioned information). Because our program has functions (and not just jump statements), that implies the existence of a stack, which implies the existence of a containing environment managing the stack. (I suppose you could have a program that manages its own stack but I'd argue then your program is not a program anymore but an environment.) This environment contains the program, starts in the entry point and executes each instruction, be it "go to this part of the RAM and add it to whatever is in this register" or "If this register is all 0 then jump ahead this many instructions and resume execution there" indefinitely or until the program gives control back to its environment. (This is somewhat simplified - context switches in multi-process environments, illegal memory access, illegal instructions, etc. can also cause control to be taken from the program.)
Anyway, so we have two options: either load the entire program at once or have it stored and loaded in pieces.
n == 1
There are some advantages to doing it all at once:
- Once the program is in memory no disk access is required to execute further (unless the program explicitly asks there to be).
- Since the program is compiled/linked before execution begins you can do everything without any sort of string names/comparisons - go directly to the address (or an offset).
- Functions are never out of sync with one another.
n > 1
There are some disadvantages, though, which mirror the advantages:
- Most programs don't execute all code paths most of the time. I think there are studies showing that in most programs, most of the execution time is spent in a small fraction of the instructions present in the program. In other words, something like 20% of the program is executed 80% of the time (I just made that particular figure up, but you get the idea). If we divide our program up enough and only load instruction sets (i.e. functions) as they are needed, then we won't waste time loading the 80% we'll never use in this execution of the program. Along these lines, we can ultimately fit more concurrently executing programs in our RAM at once if we only load the fraction of each program we need.
- Most programs share similar functions (i.e. storing data/trees/hashes, sorting, reading input, writing output, etc.), and if each program has its own local copy then you can't reuse the instruction code.
- Many programs depend on the existence of others and are maintained by separate companies/groups/individuals. By releasing versioned modules we don't have to synchronize releases all the time.
Conclusion
These aren't the only points to consider, but they're the first ones that came to my mind. I'd recommend reading about compilers, linkers and operating systems; that will answer this question, and the other questions I'm sure it has raised, more thoroughly than I can. To recap: DLLs aren't the "best" way of packaging executable programs in all situations and circumstances - they have a particular use, with advantages and disadvantages.

XNA Automatic XNB Serialization: what can it do and what is it best used for?

I've seen some small examples posted by Shawn Hargreaves showing how to manually define some XML content in order to create and populate instances of C# classes, which then get loaded through the content pipeline.
I can see that this would be useful if you had an editor capable of writing the file, allowing you to load a level or whatever.
But, I'm curious... does it only do read operations? Can you use this concept to save game data?
What else might it be used for?
Automatic XNB Serialization (details) is simply the ability for the content pipeline to read/write data without needing a ContentTypeWriter or ContentTypeReader. It doesn't actually provide any new features, besides reducing the amount of code you have to write to use the content pipeline.
Loading from XML (using IntermediateSerializer or through the content pipeline using the XML importer) is a separate thing.
IntermediateSerializer automatically converts .NET object instances to/from XML using reflection.
The content pipeline (whether you are using automatic XNB serialization or not) converts .NET object instances to/from binary XNB files.
The content pipeline also provides an extensible importer/processor system, on the content-build side, for generating the .NET objects in the first place (and it includes various built-in importers/processors). The built-in XML importer just uses IntermediateSerializer to go from XML to a .NET object instance.
The reason why you can't use these in your game to perform write operations (e.g. saving the game state, a built-in level editor, etc.) is that IntermediateSerializer (both read and write) and the XNB-writing half of the content pipeline require XNA Game Studio to be installed.
XNA Game Studio is not a redistributable. (Only the XNA runtime is redistributable.)

Cross platform file-access tracking

I'd like to be able to track file read/writes of specific program invocations. No information about the actual transactions is required, just the file names involved.
Is there a cross platform solution to this?
What are various platform specific methods?
On Linux I know there's strace/ptrace (if there are faster methods, that'd be good too). I think on Mac OS there's ktrace.
What about Windows?
Also, it would be amazing if it would be possible to block (stall out) file accesses until some later time.
Thanks!
The short answer is no. There are plenty of platform-specific solutions, which probably all have similar interfaces, but they aren't inherently cross-platform since file systems tend to be platform-specific.
How do I do it well on each platform?
Again, it will depend on the platform :) For Windows, if you want to track reads/writes in flight, you might have to go with IFS. If you just want to get notified of changes, you can use ReadDirectoryChangesW or the NTFS change journal.
I'd recommend using the NTFS change journal only because it tends to be more reliable.
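For reference, a minimal sketch of the ReadDirectoryChangesW route mentioned above. Note its limits relative to the question: it reports changes (creations, renames, writes) under one directory tree, not reads, and not per process; C:\watched is a placeholder path and error handling is kept to a minimum.

    /* Watch a directory tree for name and last-write changes. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE dir = CreateFileW(L"C:\\watched", FILE_LIST_DIRECTORY,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                 NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
        if (dir == INVALID_HANDLE_VALUE)
            return 1;

        DWORD buffer[16 * 1024];   /* DWORD-aligned, as FILE_NOTIFY_INFORMATION requires */
        DWORD bytes;
        while (ReadDirectoryChangesW(dir, buffer, sizeof(buffer), TRUE,
                                     FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                                     &bytes, NULL, NULL)) {
            FILE_NOTIFY_INFORMATION *info = (FILE_NOTIFY_INFORMATION *)buffer;
            for (;;) {
                wprintf(L"%.*ls (action %lu)\n",
                        (int)(info->FileNameLength / sizeof(WCHAR)),
                        info->FileName, info->Action);
                if (info->NextEntryOffset == 0)
                    break;
                info = (FILE_NOTIFY_INFORMATION *)((BYTE *)info + info->NextEntryOffset);
            }
        }
        CloseHandle(dir);
        return 0;
    }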
On Windows you can use the command line tool Handle or the GUI version Process Explorer to see which files a given process has open.
If you're looking to get this information in your own program, you can use the IFS kit from Microsoft to write a file system filter. The file system filter will show all file system operations for all processes. File system filters are used in AV software to scan files before they are opened or to scan newly created files.
As long as your program launches the processes you want to monitor, you can write a debugger and then you'll be notified every time a process starts or exits. When a process starts, you can inject a DLL to hook the CreateFile system calls for each individual process. The hook can then use a pipe or a socket to report file activity to the debugger.
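As a hedged sketch of the injected DLL's side of that scheme, using Microsoft's Detours library for the hooking (one common choice; the answer doesn't prescribe a library) and OutputDebugString in place of the pipe or socket a real tracker would use. Because the hook runs before the original call, it is also the point where an access could be stalled until some later time.

    /* hook.c - built as a DLL and injected into the target process.
     * Link against detours.lib. All names here are illustrative. */
    #include <windows.h>
    #include <detours.h>

    static HANDLE (WINAPI *Real_CreateFileW)(LPCWSTR, DWORD, DWORD,
        LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE) = CreateFileW;

    static HANDLE WINAPI Hooked_CreateFileW(LPCWSTR name, DWORD access, DWORD share,
        LPSECURITY_ATTRIBUTES sa, DWORD disposition, DWORD flags, HANDLE templ)
    {
        OutputDebugStringW(name);   /* report the file name (real code: pipe/socket) */
        return Real_CreateFileW(name, access, share, sa, disposition, flags, templ);
    }

    BOOL WINAPI DllMain(HINSTANCE inst, DWORD reason, LPVOID reserved)
    {
        (void)inst; (void)reserved;
        if (reason == DLL_PROCESS_ATTACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach((PVOID *)&Real_CreateFileW, (PVOID)Hooked_CreateFileW);
            DetourTransactionCommit();
        }
        return TRUE;
    }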