What license does designer/wizard-generated code fall under? [closed]

This might be a stupid question but I just wanted to make sure...
If I incorporate code generated by the IDE (Visual Studio in this case) in my software, can I apply my own license to that code or is it subject to its own license?

In the general case, you should carefully read the license that comes with your wizard or code generator.
In the vast majority of cases, the code produced by a wizard (or a compiler, or a pre-processor, etc.) is a completely separate entity from the generator itself, and no restriction is applied to it.
There are cases, though, where copyrighted code could be inserted into the generated code, for example as a set of functions to support the generated code.
Even in that case, most code generators state that that piece of code is licensed under very liberal terms. Trying to limit code modification and redistribution, or to impose run-time royalties, has proved to be a very poor business model. I've seen it used by old program generators on mainframes, for example, but not much since then.
So, in 99.9% of cases you are OK with doing whatever you want with the generated code; just read the fine print to cover the remaining 0.1%.

I am not a lawyer, but I believe that generated code is basically the same as any other output a program produces from your input. In that case the output is generally considered to be owned by the application's user (you) and not the application's developer.
The GPL FAQ covers a similar topic:
Is there some way that I can GPL the output people get from use of my program? For example, if my program is used to develop hardware designs, can I require that these designs must be free?

In general this is legally impossible; copyright law does not give you any say in the use of the output people make from their data using your program. If the user uses your program to enter or convert his own data, the copyright on the output belongs to him, not you. More generally, when a program translates its input into some other form, the copyright status of the output inherits that of the input it was generated from.

So the only way you have a say in the use of the output is if substantial parts of the output are copied (more or less) from text in your program. For instance, part of the output of Bison (see above) would be covered by the GNU GPL, if we had not made an exception in this specific case.

You could artificially make a program copy certain text into its output even if there is no technical reason to do so. But if that copied text serves no practical purpose, the user could simply delete that text from the output and use only the rest. Then he would not have to obey the conditions on redistribution of the copied text.

The code generated by VS is based on your input, so in effect you're just "compiling" from a higher-level language (the dataset designer or forms designer) to a lower-level language, C# or VB. I don't think this is any different from a compiler that generates machine code or IL based on your source code.
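To make the analogy concrete, here is a toy sketch of a designer-style generator (in Python; the form description and the emitted class are invented for illustration, and this is not how Visual Studio actually works internally): your declarative input is translated mechanically into source text, just as source text is translated into IL.

    # Toy code generator: a declarative description (the "designer" input)
    # is compiled down to source text (the generated code). Names are made up.
    form_description = {
        "class_name": "LoginForm",
        "controls": [("username", "TextBox"), ("password", "TextBox"),
                     ("submit", "Button")],
    }

    def generate_code(desc):
        lines = [f"class {desc['class_name']}:",
                 "    def initialize_components(self):"]
        for name, kind in desc["controls"]:
            # Each designer entry becomes one line of generated source.
            lines.append(f"        self.{name} = {kind}()")
        return "\n".join(lines)

    print(generate_code(form_description))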


Is it possible to beat the Assembler today? [closed]

Apparently it used to be, according to this terrific account by Ed Nather. How about today? That is, is it possible, with enough knowledge of CPU/FPU/GPU/etc. architecture, to write machine code that is more efficient than what would be produced by a mainstream assembler (nasm, GAS, etc.), in any scenario? How about for GPU kernels?
EDIT: "Not constructive"? Please. This question produced #Pointy's answer, which was quite enlightening to anyone not that familiar with how assemblers work. Someone has favorited it. The fact that Pointy is, endearingly, one of the close-voters is a nice touch but hey if it's the best answer it gets accepted.
Two things:
There's no single thing called "assembly language". An assembler is a program that translates a textual encoding of instructions for a particular architecture into a form suitable for execution. Exactly what facilities a particular assembler exposes is up to its designer. Many CPU architectures have several assemblers available.
Because an assembler's job is to provide a "friendly" way for a person to request a precise sequence of machine instructions (and other aspects of a program, such as initialized memory locations, reserved blocks of storage, directives to the runtime executive, etc.), if it's possible to produce a program by hand that can't be produced by some particular assembler, then that really only means you've got an inadequate assembler. The assembler that Intel developed for the iAPX 86 series (not Microsoft's masm, which was a weak imitator) had a fairly typical macro facility, and it also had a sort of "micro macro" facility that allowed the dictionary of opcode mnemonics (things like MOV, ADD, BNE, etc.) to be extended arbitrarily. With an assembler like that, it would clearly be possible to create any piece of code you desired.
The real topic for concern in programming is whether burdening the programmer with the responsibility for choosing a strategy for getting work done by a computer in extreme detail is worthwhile for performance. The question of course has no single answer, because there are many possible situations, many different computing devices, and mostly because things change all the time. In 1959, for example, the computing task of translating a higher-level language like FORTRAN into machine code was itself a significant workload for computers. Understanding of how programming languages should even work was in its infancy.
Today, then, the only reason to know "machine language" (and note that the word "language" isn't really accurate) is to create an instruction sequence when there's no available (or convenient) assembler. That's assuming that explicitly creating a particular instruction sequence is better than using a higher-level language for some reason. Even then, it's generally the case that if you were doing that now you'd be writing software in some high-level language to emit the chosen instruction sequence; that is, you'd effectively create a "domain-specific assembler" for some task. A good example would be the code in something like a virtual machine interpreter that builds machine language blocks on-the-fly, like a Java or JavaScript VM.
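To make that last point concrete, here is a minimal sketch (Python, with the helper name my own) of a "domain-specific assembler" that emits the raw x86-64 encoding of mov eax, imm32 followed by ret; a real JIT would then copy such bytes into executable memory and branch to them, which is not shown here.

    # Emit machine code directly from a higher-level program: a tiny
    # "domain-specific assembler". Encodings: 0xB8 + imm32 = mov eax, imm32;
    # 0xC3 = ret (x86-64).
    def emit_mov_eax_ret(value: int) -> bytes:
        return b"\xb8" + value.to_bytes(4, "little") + b"\xc3"

    print(emit_mov_eax_ret(42).hex(" "))   # b8 2a 00 00 00 c3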
The assembler takes assembly language and turns it into machine code. Ideally, though not always, the assembly language has a one-to-one relationship with the machine-code instructions. Most of the time the translation from the assembly-language syntax for an instruction to the machine code will be identical whether the assembler does it or it is done by hand. Naturally there are some don't-care bits from time to time, and the assembler and the human may choose different don't-care bits, so the result doesn't have to be a bit-for-bit match; but in length and speed they will be identical, with no difference at all.
The differences between the human and the assembler software will be, if any, where the assembly language does not have a one-to-one relationship with the machine code, and/or where for various reasons the programmer wants the assembler to take care of something. This could be pseudo-instructions, or macros, or things having to do with externally defined variables.
Assembly language is a loaded term, as it is defined by the particular assembler; you can have many different and incompatible assembly languages for the same processor. And you can have assembly languages where there are instances in which the language does not completely describe all the information needed to choose a specific instruction: near vs. far jumps, for example, in some instruction sets with some assemblers.
So if you want to compare apples to apples, there will be no difference between hand-assembled code and software-assembled code. Apples to apples means the code in question is written precisely enough, not vaguely, so that both the software and the human assembler can assemble it. If you do find differences other than don't-care bits, then it probably has to do with an optimization, which means the human assembler changed the code; to make it fair, the matching assembly language can and should be changed to match. Such a difference would have nothing to do with human versus software assemblers, but with one programmer's program as compared to another's. Basically, you could and would get the same result in assembly language with the software assembler.
A skilled assembly-language programmer who is targeting a very specific run-time environment can probably produce code which will run better than a compiler would produce. Depending upon the nature of the code, the performance improvement may or may not be significant relative to the amount of work required.
On the other hand, frameworks such as Java or .NET allow a programmer to compile software to an "intermediate" form which can, on demand, be translated into machine code that includes specific optimizations for the environment where it's actually running. Code which is compiled to run on "any platform", when run by a framework engine that was hand-tweaked for the platform it's running on, may not run as well as assembly code that was hand-tweaked for that platform, but better than code which was hand-tweaked to optimize performance on some other platform.
"Apparently it used to be, according to this terrific account by Ed Nather. "
I remember reading a story like that decades ago.
It's been doing the rounds for a long time.
The blackjack bit is an embellishment, but the code dropping out at the right point, and the weeks spent trying to figure out how it worked before the penny finally dropped, definitely ring a very old bell.

Internal Source Code Documentation - FiM++ [closed]

The structure of a FiM++ program requires that it end with the closing of a letter and the code author's name in a specific manner.
Dear Princess Celestia and Stack Exchange and String: A Sample:
...
Your faithful student, Southpaw Hare!
According to the language specification, the keyword "Your faithful student," (including the comma but not the following space) is used as an end tag for class definitions, and the following name is a comment with no syntactical effect.
The fact that the author is automatically included (if not strictly required) in every file makes me wonder if it can be used as a form of interpretable documentation akin to Java Docs. In other words, that other programs or editors would be able to parse out this name and use it in some manner.
What is the requirement of such internal comment-based documentation? Is there anything in this particular type of syntax that would cause problems?
Is the keyword sufficient to fit with the theme? It occurs to me that the lack of ability to use "Your faithful students," for a plural form (or possibly "Yours faithful," or "Yours truly," for an ambiguous version) would make listing multiple authors look awkward and unnatural (and looking like a natural human-written letter is one of the core design paradigms).
If creating a Java Docs methodology was considered, then what other features should be included? For one, a date seems common. Including some form of date comment at the top of the letter would probably look natural and not defy the design paradigm.
Since the language is new, unfamiliar to most, and honestly quite silly, here are a few resources to consider:
Original Release Announcement
October Followup
Sorry no one's given this any concern before me!
I'm heading development of the language, so I think I have a good grasp on the answer, here.
What is the requirement of such internal comment-based documentation? Is there anything in this particular type of syntax that would cause problems?
I've never considered an auto documentation technique like Javadoc, so there is no formal syntax for that. The compiler I'm working on completely discards comments, so it won't support it, but I'm sure it wouldn't be terribly hard.
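For illustration only, a separate tool could pull the author name out of the closing line with something like the sketch below (Python; the regular expression and sample are mine and not part of any FiM++ specification).

    import re

    # Doc-extraction sketch: find the "Your faithful student," end tag and
    # treat the rest of the line as the author comment.
    AUTHOR_TAG = re.compile(r"Your faithful students?,\s*(.+?)[.!]?\s*$")

    def extract_author(source_text):
        for line in source_text.splitlines():
            match = AUTHOR_TAG.search(line)
            if match:
                return match.group(1)
        return None

    sample = ("Dear Princess Celestia and Stack Exchange and String: A Sample:\n"
              "...\n"
              "Your faithful student, Southpaw Hare!")
    print(extract_author(sample))   # Southpaw Hare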
Is the keyword sufficient to fit with the theme? It occurs to me that the lack of ability to use "Your faithful students," for a plural form (or possibly "Yours faithful," or "Yours truly," for an ambiguous version) would make listing multiple authors look awkward and unnatural (and looking like a natural pony-written letter is one of the core design paradigms).
The idea of the author name on the last line was intended for the foremost author of the report, so multiple authors were never suggested before now. However, "Your faithful students," would work nicely!
If creating a Java Docs methodology was considered, then what other features should be included? For one, a date seems common. Including some form of date comment at the top of the letter would probably look natural and not defy the design paradigm.
Indeed! Perhaps something at the bottom of the report, like
(Written 2013-04-11)
Hope this helps you. You have some great ideas here, too! You should join the team!

How to document applications and how they integrate with other applications? [closed]

As the years go by we get more and more applications. Figuring out if one application is using a feature from another application can be hard. If we change something in application A, will something in application B break?
We have been using MediaWiki for documentation, but it's hard to keep the data up-to-date.
I think what we need is some kind of visual map of everything, and perhaps the possibility to enforce some sort of referential integrity between applications. Any ideas?
I'm in the same boat and still trying to sell my peers on Enterprise Architect, a CASE tool. It's a round-trip tool: code to diagrams to code is possible. It's a UML-centric tool, although it also supports other methods of notation that I'm unfamiliar with...
Here are some things to consider when selecting a tool for documenting designs (be they inter-system communication, or just designing the internals of a single app):
Usability of the tool. That is, how easy is it to not only create, but also maintain the data you're interested in.
Familiarity with the notation.
A. The notation, such as UML, must be one your staff understands. If you try using a UML tool with only a few people understanding how to use it properly, you will get a big ball of confusion: some people document things incorrectly, and someone who understands what the UML says to implement either spots the error or goes ahead and implements the erroneously documented item. Conversely, more sophisticated notation used by the adept will confound the uninitiated.
B. Documentation isn't (and shouldn't be) created only for the documenter's exclusive use. Those who will be reading the documentation must understand what they're reading, so getting a tool with flexible output options is always a good choice.
Cost. There are far more advanced tools than Enterprise Architect. My reasoning for using this one is that a lack of UML familiarity and high-pressure schedules leave little room to educate myself or my peers beyond using basic structure diagrams. This tool easily facilitates such use and is more stable than, say, StarUML. (I tried both; StarUML died on reverse-engineering masses of code -- millions of lines.) For small projects I found StarUML adequate for home use, up until I got Vista installed. Being open source, it's also free.
With all that said, you will always have to document what uses what, and that means maintaining the documentation! That is a task few companies see the value in, despite its obvious value to those who get to do it...
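A low-tech complement for the "what uses what" part: keep the dependency list in a machine-readable form and render the visual map from it automatically. A minimal sketch (Python plus Graphviz; the application names are placeholders):

    # Turn a hand-maintained list of "A uses B" pairs into a Graphviz DOT
    # file; render it with:  dot -Tpng deps.dot -o deps.png
    dependencies = [
        ("WebShop", "Billing"),
        ("WebShop", "CustomerDB"),
        ("Billing", "CustomerDB"),
    ]

    def to_dot(pairs):
        lines = ["digraph applications {"]
        for user, used in pairs:
            lines.append(f'    "{user}" -> "{used}";')
        lines.append("}")
        return "\n".join(lines)

    with open("deps.dot", "w") as f:
        f.write(to_dot(dependencies))

Keeping that list current is the same discipline problem as the wiki, but at least the picture regenerates itself.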

Are scripts "open source" by definition? [closed]

The other day, I tweaked a script for a friend's World of Warcraft addon. He was surprised that you could edit the addons—that they were "open source." (World of Warcraft addons are written in the Lua scripting language.) I found myself wanting to say "Sure you can—all scripts are 'open source'."
Is that true? Sure, some scripts can be compiled to bytecode, but aren't almost all scripts interpreted? That is to say, doesn't the device interpreting the script need the "source," by definition?
It depends on how you interpret "open source".
Sure, you have the source code, but typically that isn't exactly what Open Source means. Usually open source refers to the licensing. To have something "open source" means that you are free to modify the source for any purpose, including redistributing it in many cases.
Just having the source doesn't make it open source in the general software sense. If the script is copyrighted, then it is technically "closed" except in cases where an Open Source license is explicitly given. You could modify it, but if you redistribute it without the author's permission you are in violation of their implied (or explicitly registered) copyright.
Open source is about licensing. A script can have any license the author (or copyright holder, such as an employer) wants. So the answer is "no".
It is the case that scripts are typically distributed in the same form that they're written - no compiled format. So you can see the source. That doesn't mean they're open source.
No.
"Open source" is not the same thing as being able to view the source code; open source licencing is about the legal right to derive works from that source code.
If you take someone else's work, modify and redistribute it without their explicit consent, then you are infringing upon their copyright, and breaking the law.
"Open Source" doesn't just mean that you have the source, it's also used to describe your legal right to redistribute the code either with or without modifications.
Based on the copyright & licensing, many scripts are not open source.
As noted by many, just because you have access to the source doesn't give you the right to do as you like with it.
You ask
Aren't almost all scripts interpreted? That is to say, doesn't the device interpreting the script need the "source," by definition?
No. Even in an interpreter, the source goes through several transformations before being interpreted. The form that is ultimately interpreted is often a sequence of instructions for a stack-based or register-based virtual machine; such instructions are usually called "bytecode". It is also possible to interpret internal trees relatively efficiently. Interpreters designed primarily for teaching purposes may use even less efficient schemes.
Some implementations permit you to take the internal form and write it to disk, from which it can be re-read and interpreted. The perceived advantages are usually
Programs load and run faster because the initial processing stages are performed once before the internal form is written, then reused over and over.
The internal form protects the source code from prying eyes.
The main disadvantage is that a typical internal form is usually less portable than source code, perhaps because of differences in byte order or word size.
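If you want to see both points for yourself, Python (used here purely as an analogy for any interpreted language, not as a statement about Lua) exposes the internal form directly: dis shows the bytecode that is actually interpreted, and py_compile writes that internal form to disk.

    import dis
    import py_compile

    # The interpreter runs bytecode for a stack-based virtual machine,
    # not the source text itself; dis displays those instructions.
    def add(a, b):
        return a + b

    dis.dis(add)   # prints the stack-machine opcodes (names vary by version)

    # Writing the internal form to disk, in the spirit of luac:
    # py_compile.compile("mymodule.py")   # emits a .pyc under __pycache__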
In the particular case of Lua, the luac compiler will write bytecode to disk. It is seldom used, because the bytecodes are not portable and because the compiler is already quite fast. In the particular case of World of Warcraft, they are actually encouraging people to use Lua to modify the interface and to customize the experience; they want everybody to share code and so keep it open source. WoW has over 10 million subscribers, and at least 5000 people have contributed code, so that's at least one contributor for every two thousand subscribers, which gives me happy thoughts about the future of programming as a profession.
To distribute a program for an interpreter, you have to send source (though not necessarily comprehensible source). This does not automatically mean that it is Open or Free in the way those terms are often used.
I seem to remember reading something in the game's terms and conditions that requires addons to be licensed as open source, but I can't seem to find it now, so I may have been imagining it. In all practical cases they are, though.
You can compile Lua and other scripting languages and obscure them in various ways. It's only more common--not more necessary--for the source to be open by default than is the case with other languages.

Are there some projects that rate RPG source? like software metrics? [closed]

I just wanted to know if you know of some projects that can help to decide whether the analyzed source is good or bad RPG code.
I'm thinking in terms of software metrics, the McCabe cyclomatic number, and all those things.
I know that those numbers are merely a hunch or two, but if you can present your management with a point score they are happy, and I get to modernize all those programs that otherwise work as specified but are painful to maintain.
So yeah... know any code analyzers for (ILE) RPG?
We have developed a tool called SourceMeter that can analyze source code conforming to RPG III and RPG IV versions (including free-form as well). It provides you the McCabe Cyclomatic Number and many other source code metrics that you can use to rate your RPG code.
If the issue is that the programs are painful to maintain, then the metric should reflect how much pain is involved with maintaining them, such as "time to implement new feature X" vs. "estimated time if the codebase wasn't a steaming POS".
However, those are subjective (and always will be). IMO you're probably better off refactoring mercilessly to remove pain points from your development. You may want to look at the techniques of strangler applications to bring in a more modern platform to deliver new features without resorting to a Big Bang rewrite.
The SD Source Code Search Engine (SCSE) is a tool for rapidly searching very large sets of source code, using the language structure of each file to index the file according to code elements (identifiers, operators, constants, string literals, comments). The SD Source Code Search Engine is usable with a wide variety of languages such as C, C++, C#, Java ... and there's a draft version for RPG.
To the OP's original question, the SCSE engine happens to compute various metrics over files as it indexes them, including SLOC, comment lines, blank lines, and Halstead and cyclomatic complexity measures. The metrics are made available as a byproduct of the indexing step. Thus, various metrics for RPG could be obtained.
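If a full tool is more than you need, rough counts of the same kinds of metrics can be scripted. A minimal sketch (Python; the decision-keyword list and the asterisk-in-column-7 comment test are simplifications of fixed-form RPG, not a faithful parser, and the file name in the usage comment is a placeholder):

    import re

    # Crude metrics: SLOC, comment lines, blank lines, and a decision count
    # used as a rough stand-in for cyclomatic complexity.
    DECISION = re.compile(r"\b(IF|WHEN|DOW|DOU|FOR)\b", re.IGNORECASE)

    def rough_metrics(lines):
        sloc = comments = blanks = decisions = 0
        for line in lines:
            if not line.strip():
                blanks += 1
            elif len(line) > 6 and line[6] == "*":   # comment marker in column 7
                comments += 1
            else:
                sloc += 1
                decisions += len(DECISION.findall(line))
        return {"sloc": sloc, "comments": comments, "blanks": blanks,
                "cyclomatic_estimate": decisions + 1}

    # Usage sketch: rough_metrics(open("MYPGM.RPGLE").read().splitlines())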
I've never seen one, although I wrote a primitive analyser for RPG400. With the advent of free form and subprocedures, it was too time consuming to modify. I wish there was an API that let me have access to the compiler lexical tables.
If you wanted to try it yourself, consider the notion of reading the bottom of the compiler listing and using the line numbers to at least get an idea of how long a variable lives. For instance, a global variable is 'worse' than a local variable. That can only be a guess because of GOTO and EXSR.
Lot of work.
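For what it's worth, here is a hedged sketch of that listing idea (Python; the cross-reference format is invented, and real ILE RPG compiler listings will need real parsing): given each variable's referenced line numbers, the span between first and last use approximates how long it lives.

    # Estimate variable life spans from a (hypothetical) cross-reference in
    # which each entry is a name followed by the line numbers that use it.
    def life_spans(xref_lines):
        spans = {}
        for entry in xref_lines:
            parts = entry.split()
            name, line_numbers = parts[0], [int(p) for p in parts[1:]]
            spans[name] = max(line_numbers) - min(line_numbers)
        return spans

    sample_xref = [
        "CUSTNO 12 48 197",   # global-ish: alive across 185 lines
        "IDX    301 305",     # local-ish: alive for 4 lines
    ]
    print(life_spans(sample_xref))   # {'CUSTNO': 185, 'IDX': 4}

As noted above, GOTO and EXSR mean the number can only ever be a guess.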