SCons or CMake instead of qmake [closed]

I need some advice on the following matter:
I have a Qt project which is currently set up to work nicely with qmake. However, due to expanding requirements and the future direction of the project, I need to change its build system, since the application will require changes in the way it is built.
Right now every source file is compiled into one fairly big executable, which is packaged (manually) and sent to the download area. All is fine.
But my aim is to modularize the application so that each "feature" is compiled into a shared library and the user (developer) can choose which components to compile. These "features" live in directories in the source tree (for example: query_builder, reverse_engineer, mysql_DB_support, version_management directories, etc.). When users build the application, they simply tell the build system to compile an application with the query builder and MySQL support but no reverse engineering, and the build system picks up the source files from the specified directories and creates a library from each.
I also have other requirements such as:
windows build, linux build
optionally package build (deb, rpm)
support for Qt and possibly Qt 5
multiple executables (GUI client, CLI client)
After some "market research" I have ended up with CMake and SCons as two possible systems I might use. I have some CMake experience, and some python experience, but no SCons yet.
But I don't know which one is best for my case, and this is where I need your help. Could you advise which one I should use? And if you think my requirements are achievable with qmake, please let me know that too.
Cheers,
f.

There is no correct answer to this question, and it usually boils down to personal preference, kind of like vi versus emacs (the correct answer is vi, of course :)
You should study the pros and cons of each and evaluate how those fit with your requirements and needs.
I am partial to SCons, mainly because I can't stand the CMake syntax, but that is a personal preference. Here are some pros and cons of each as I see it:
CMake
Pros:
Similar to qmake, in that it is a makefile generator
CMake is widely used, so there is a lot of reference material and help available
Has a GUI (I don't know it myself; this is based on Calvin1602's comments below)
Cons:
CMake has its own invented syntax, which many (myself included) find unintuitive.
Two-step build process: first generate the Makefiles, then actually perform the compilation.
It's next to impossible to read the generated Makefiles.
SCons
Pros:
The syntax is Python, which is widely used and relatively easy to learn. (and Python is cool :)
The build process is one step, just execute SCons, and it compiles. No intermediate build files to generate or maintain.
In the past SCons was slower than CMake, but it has since gotten much faster, probably as fast as or faster than CMake since it doesn't have to generate makefiles first.
Rich feature set, with many languages supported, whereas CMake focuses on C/C++.
Very accurate, implicit dependency system: no need to explicitly list dependent headers, libraries, etc. Explicit dependencies can be specified if needed.
An Eclipse plugin is available, and Eclipse also has plugins for Python.
Has tools created for Qt projects to handle MOC and other related codegen as mentioned here.
Cons:
SCons may not be as widely used as CMake, but there is still plenty of support available.
Depending on the size of the project, SCons may use a lot of memory, since it parses all of the build scripts and builds a dependency tree in memory before actually compiling anything. However, this is what allows for the more accurate dependency checking.
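To make the feature-selection scenario from the question concrete, here is a minimal SConstruct sketch (SCons build scripts are Python). The feature names, directory layout, and command-line flags are assumptions for illustration, not a drop-in solution:

```python
# SConstruct -- sketch of the optional-feature build described in the question.
# Feature names and layout are hypothetical; invoke e.g.:
#   scons query_builder=1 mysql_db_support=1
import os

# SCons injects Environment, ARGUMENTS, and Glob into SConstruct files.
env = Environment()

FEATURES = ['query_builder', 'reverse_engineer', 'mysql_db_support', 'version_management']
enabled = [f for f in FEATURES if ARGUMENTS.get(f, '0') == '1']

for feature in enabled:
    # Each selected feature directory is compiled into its own shared library.
    sources = Glob(os.path.join(feature, '*.cpp'))
    env.SharedLibrary(target=feature, source=sources)

# The main executable links only against the libraries that were selected.
env.Program(target='app', source=Glob('src/*.cpp'), LIBS=enabled, LIBPATH=['.'])
```

The same selection logic maps onto CMake via option() flags plus add_subdirectory(), so either tool can meet the stated requirements; the difference is mostly in which syntax you would rather maintain.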


REBOL3 - what is the difference between the different branches?

What are the differences between the different Rebol 3 branches, especially with the new REN branch?
Is it the platforms they'll run on, the feature set, code organization, the C standard compliance?
This is an answer destined to become outdated, hence set to Community Wiki. This information is as of Sep-2015. So if updating this answer after some time has passed, please modify the date as well.
Binary download of Rebol3 from rebol.com
Last build was 5-Mar-2011 and pre-dates the open source release.
No GUI support, no HTTPS support, no serial port support, no UDP support, no smart console...
No 64-bit builds. Binaries are for Windows x86, OS/X (PPC or x86), Linux (x86 or PPC), FreeBSD x86.
While Rebol2 binaries are archived for many "esoteric" systems (BeOS, AIX, Windows DEC Alpha, QNX, Solaris...) similar binaries were not provided for Rebol3. The only "weird" build is for Amiga, and only an OS4 PowerPC Amiga. No successful builds of Rebol3 for Amiga emulators have been reported.
Open source release of Rebol3 on Github rebol/rebol
Open-sourcing was on 12-Dec-2012.
The rebol.com binary downloads were not rebuilt as part of this release. However, a community member (#earl here on SO) created a build farm at rebolsource.net that rebuilds from this GitHub master whenever it updates. Given that GitHub's rebol/rebol master hasn't been updated since March 2014, that automation currently has little to do.
Building the source at the time of release produced an executable apparently indistinguishable in functionality from the builds of 5-Mar-2011. This suggests few changes to the source were made besides some cleanup and Apache-licensing edits to prepare for publication.
Minor patches and bugfixes were integrated sporadically, with most PRs sitting idle. Last PR accepted at time of writing was Mar 3, 2014, which is over a year ago.
The most noticeable "breaking" PR that did get approved was to repurpose the FUNCTION name. It was considered to be worth breaking the old arity 3 form to let the word be taken for the much more useful implementation as locals-gathering FUNCT. (This also brought Rebol in alignment with Red, whose FUNCTION is arity 2 and acts similarly.) FUNCT was kept around as-is for legacy code.
The most major non-breaking PR that was taken is probably not requiring blocks around IF, UNLESS, or EITHER bodies. This has been received well among those who know it's there, as fitting the freeform and non-boilerplate philosophy of the language. It allows some code constructs to get "prettier" and gives programmers more choice, while it doesn't seem to cause any more problems than anything else. It's certainly less of a speedbump than if [condition] [...], in fact it seems almost no one knows this feature got added, so it must not be biting anyone. (If anyone can bend ears over at Red to make sure it gets IF and IF/ONLY then that would be ideal.)
RETURN/REDO was removed. The rationale was that it permitted functions to effectively behave with variable arity, and that this was unnecessary and took terra firma away by no longer being able to predict a function's arity from its spec. Perhaps this stance warrants a second look, as the Lisp users who are pressuring for the addition of Lisp-style macros seemingly aren't worried about that very much. (Here in the StackExchange universe, this provoked a Programmers.SE question, Would Rebol (or Red) benefit from Lisp-style Macros?, which hasn't gotten much in the way of answers yet.)
The fork by Saphirion: "Saphir"
Prior to the open-sourcing of Rebol, Saphirion AG had a special relationship with Rebol Technologies. They had access to the source and were taking responsibility for most of the development work on Rebol3's GUI features. They also added several other things, like HTTPS.
Saphir is available as a binary download from their website, but only provided for 32-bit Windows. There was at one time an experimental .APK for Android from Saphirion.
Some (but not all) of Saphir's source was released after the open-sourcing. Notable omissions were the android build and some Rebol3 code for encapping...a way of injecting compressed scripts and resources into binaries of the interpreter without needing to recompile it.
(Note: Under Apache2 license there is no requirement to release source code for one's derived work.)
"Community" Integration at Rebolsource on GitHub
With the GitHub rebol/rebol being held up on integrations, a fork at rebolsource/r3 was established to be a "community build" where work could be staged.
Rebolsource changes were conservative, seemingly aimed toward showing process for how GitHub's rebol/rebol might adopt changes "in the spirit in which Rebol was conceived" should that repository be delegated to the community. (For that spirit, see this.) Hence it integrated non-controversial bugfixes and tweaks, instead of large third-party cryptography libraries for implementing HTTPS. Also: no allowance for adding build dependencies besides a C compiler (no GNU autotools, for instance).
Binaries for the community build were produced on an as-needed basis for those requesting them who could not build it themselves.
Atronix Engineering's Rebol "3.0" at Github zsx/r3
Atronix is an industrial automation solutions provider that uses Rebol. How they do so is described in a video here by David den Haring, director of Engineering, and their ZOE software is built on their version of Rebol.
After the open sourcing, Atronix partnered with Saphirion to port the GUI to Linux. Atronix publishes their source publicly as it is developed, and David den Haring notes in the video above that they have only one proprietary component they developed (an industrial control driver). Other than that they are happy to share the source for all Rebol development they do.
Atronix integrated the 64-bit patches from Rebolsource, created a Windows 64-bit target, and offer up-to-date binaries of their development branch for Windows and Linux x86/x64, as well as Linux ARMv7.
Besides having the features of Saphir, the Atronix build added support for CALL with /INPUT, /OUTPUT, /ERROR. It also added a Foreign Function Interface, implementing LIBRARY!, ROUTINE! and STRUCT! for communicating with non-Rebol dynamic libraries. It brings in encapping support as well on Windows and Linux.
Rebol's "religion" was at times at odds with expedience, so the Rebol-based build process was replaced when needed by hand-edited makefiles and Visual Studio projects. The FFI library introduced a dependency on GNU autotools to build.
All Atronix builds include the GUI, so there is no "Core" build. And again, only Linux and Windows.
Ren-C
(Bias Note: This fork is the initiative #HostileFork started, knows the most about, and will speak most enthusiastically about.)
Ren-C started as an extraction of a Core build out of Atronix's codebase. That gave it features like HTTPS, the enhanced CALL, and the Foreign Function Interface on essentially all the platforms that Rebolsource was able to build for. Updates (Jul/Sep-2015): Ren/C supports line continuations in the console, user infix functions, several bugfixes...
Ren-C makes large-scale changes and fixes fundamental issues in R3-Alpha, which are tracked on a Trello that provides more information. There is a new FAQ as a GitHub wiki. Critical issues like definitionally-scoped returns have been solved, with continuous work on other outstanding problems.
Though Atronix's R3/View required some additional dependencies, Ren/C pushed back to being able to be built with nothing besides a C compiler, and eliminated all handmade makefiles/projects.
Beyond Windows, Linux and Mac in both 32-bit and 64-bit variants, Ren/C has also been built for smaller players like HaikuOS and yes, even Syllable. This is interesting more for the demonstration of how broadly turnkey builds of the C89 code work (simply as make -f makefile.boot) as opposed to there being a particularly large userbase of those particular OSes!
From the point of view of language rigor, Ren/C is pushing on modern techniques. Although it can still build as C89, it can be built as C99 and C11 as well. It has also been verified to build as C++98 through C++14, and with some strategic modifications under #ifdef __cplusplus it can take advantage of modern C++ as a kind of static analysis tool over the C code. Warnings are raised, type errors all fixed up, and it's "const correct". The necessary changes were carefully considered to make Rebol's baseline C code not just more correct but cleaner and clearer source across the board.
From a point of view of C developers, Ren/C should be stable, organized, and commented enough for anyone who knows C to "modify with confidence" and try new features. That means being able to implement definitionally scoped returns (actually written, but not pushed), or try developing features like NewPath.
From a point of view of architecture, Ren/C is intended to not have an executable at all...but to be a library for embedding a Rebol interpreter into other programs. It is now the basis for Ren/C++, which was designed to anticipate working with Red as well.
From a point of view of testing, Ren/C intends to whip everything into shape for engineering rigor and zero bug tolerance. This means avoiding practices like zero-filling memory to obscure uninitialized memory accesses, using Address Sanitizer, Valgrind, and a test suite that can pass the highest settings on both.
While enabling all the extra functionality has made Ren/C's executable nearly twice the size of Rebolsource's, there has not yet been any audit to see how this can be brought down. It has been confirmed, for instance, that there are duplicate copies of zlib and of PNG encoding/decoding code (Saphirion included LodePNG, likely to work around a bug in the existing PNG code because that was easier than fixing it, yet did not mothball the previous code). Also, being able to do a build which selectively integrates only the codecs you want to use is on the agenda.
Ren/C currently has the stakeholders from Atronix and Rebolsource participating in its development and direction, which strengthens the likelihood that it may evolve into "the" Rebol Core. It is now being linked in as the code backing Ren Garden, and using a similar approach it may be set up as the library used by Atronix's R3/View...then Rebolsource...and perhaps ultimately rebol/rebol itself.
The fork by Oldes
(Bias Note: this edit is added 28-Feb-2019 by Oldes himself)
Forked from the community branch. The main focus is on keeping the code close to Carl's original release, without blindly taking everything from Atronix/Saphirion, while still slowly picking up the good things from those branches.
Unlike Ren-C, this version is not trying to introduce new syntax, but rather to stay closer to the original Rebol2 and the new Red language.

Tools for upstream maintainers? For testing before release (Debian, etc.) [closed]

I develop a library that is used by other software. Typically this library ends up packaged in Debian, Fedora, etc., and its "reverse-dependencies" also end up packaged and using it.
So, I guess this makes me an "upstream maintainer." I simply use autotools to produce a tarball, and packagers then use that to produce .deb files, etc. Now, something that has bothered me for quite some time is the disconnect between maintainers and packagers. I feel like every time I do a release, even if it is simply a bugfix release, I am potentially causing headaches for everyone down the chain.
Possible problems:
I introduced a bug that wasn't caught in testing, even though I tried extensively to test various configurations. I don't have unlimited testing resources, and it is a small library, so I am mostly on my own; there are one or two other interested people who help out, but they generally test on only one platform.
I forgot to bump the version number, causing confusion
I did bump the version number but forgot to bump the SO version (you know, the thing that specifies API/ABI compatibility and is independent of the software release version)
I made a small change but accidentally caused an API incompatibility without thinking (e.g., made something "const" that should have been all along, not realizing it would break people's code)
I made a small change but accidentally caused an ABI incompatibility (e.g., changed a constant in a header file, forgetting that its value would be "baked into" software compiled against a previous version)
I have done pretty much all of these things at some time or other in the past. Due to these previous mistakes, these days I probably spend more time testing than actually developing, and I still end up making mistakes. The mistakes are often not that bad; after all, people understand that mistakes happen. But they sometimes cause people to stop using the library without even talking to me or communicating on the mailing list, which sucks. If those people were so interested, it would be cool if they had helped test before I published a release. But anyway, you get the idea.
So, rather than just compiling and running the unit tests, my testing process now involves some pretty extensive steps. In particular, I am now using "apt-cache rdepends" to find software that uses my library; I install each such package and swap the binary out to test for ABI compatibility. Then I uninstall it, "apt-get source" it, and compile it against the new version to test for API compatibility.
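As a rough illustration, the discovery half of that process can be scripted. This is a minimal sketch assuming a Debian-based system; the package name libexample is a placeholder:

```python
#!/usr/bin/env python3
"""Sketch: list the reverse-dependencies of a library package on a Debian system."""
import subprocess

PACKAGE = "libexample"  # hypothetical package name

def reverse_dependencies(package):
    # 'apt-cache rdepends' prints the package name, a "Reverse Depends:" header,
    # then one indented line per reverse-dependency.
    out = subprocess.run(["apt-cache", "rdepends", package],
                         capture_output=True, text=True, check=True).stdout
    deps = set()
    for line in out.splitlines():
        line = line.strip()
        if line and line not in (package, "Reverse Depends:"):
            deps.add(line.lstrip("|"))  # '|' marks alternative dependencies
    return sorted(deps)

if __name__ == "__main__":
    for dep in reverse_dependencies(PACKAGE):
        # From here one could 'apt-get source' each package and try
        # rebuilding it against the new library version.
        print(dep)
```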
This kind of testing involves:
understanding other people's software and figuring out where and how it exercises my code
compiling other people's software, including figuring out their other dependencies and how to get everything working; for large projects this can be a nightmare
some projects using my software are actually plugins for other projects, meaning I additionally have to get the host program working
many projects using my library are GUI-oriented, so I have to navigate and learn software I don't even know or use, and then guess whether I have gotten it to a place where it is actually calling out to my library
my library works on Linux, Windows, and OS X, and I often don't have enough machines and operating systems around to test on. For example, a huge problem with my last release was a bug that only showed up on Linux on x86_64. I had tested on Linux i386 and 64-bit OS X, but somehow those platforms didn't show the bug; it was particular to the 64-bit Linux combination, which I had neglected to test because I didn't have the right hardware and assumed I'd covered enough ground.
As you can imagine this is not a light task, and makes for huge delays before publishing a given release, delaying the dissemination of bugfixes, etc. The worst thing is that my project is not even a large library, and is a hobby project of mine, so all of this feels like huge overhead just for something I do in my spare time. I'd rather be developing features than just defending against my own potential mistakes for every little change I make. But, it currently has 42 rdepends listed in Ubuntu, to give you an idea, and I'm proud that it is useful to other people so I want to be able to develop and improve it without worrying so much about breaking things for everyone.
My question is, how can I improve the efficiency of this testing process? Are there for example any tools that will automatically compile "rdepends" packages against a new version of my library and give me a report? Or somehow download compiled binaries of rdepends and test loading them against my ABI without actually necessarily requiring me to navigate the GUI of some unknown software?
how can I improve the efficiency of this testing process?
The main problem is communication, apart from the fact that you lack scripts to automate the process. You can do pre-releases of your packages, mail the distributions that your library supports, etc., or, instead of maintaining the packages yourself, get them into some major distro and let an experienced maintainer handle it.
You will always break people's stuff now and then; just don't do it too frequently. Remember that people need a certain amount of stability, so document each change very well; that way, people using your library can't say you didn't tell them.
About tools: you should find your own pace. Maybe some buildbots (AFAIK some projects lend out build bots), maybe scripts automating your build process, etc. The problem is too broad, and there are too many possible solutions for any single suggestion to be viable. You may want to check https://softwareengineering.stackexchange.com/q/150466/104338 for "some" methods but, again, you should find your own pace.
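One cheap piece of automation worth scripting, as a hedged sketch: compare the dynamic symbol tables of the old and new shared objects to catch removed symbols before doing any of the heavier rdepends rebuilds. This assumes a Linux system with binutils' nm available; the file names are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: flag exported symbols that disappeared between two library builds."""
import subprocess

OLD = "libexample.so.1.0"  # placeholder paths
NEW = "libexample.so.1.1"

def exported_symbols(path):
    # 'nm -D --defined-only' lists the dynamic symbols a shared object exports.
    out = subprocess.run(["nm", "-D", "--defined-only", path],
                         capture_output=True, text=True, check=True).stdout
    return {line.split()[-1] for line in out.splitlines() if line.strip()}

missing = exported_symbols(OLD) - exported_symbols(NEW)
if missing:
    print("Symbols removed (potential ABI break):")
    for sym in sorted(missing):
        print(" ", sym)
else:
    print("No exported symbols removed (this does not prove full ABI compatibility).")
```

This only catches removed symbols; changed struct layouts or constants baked into headers still require the rebuild-and-run testing described in the question, or a dedicated tool such as abi-compliance-checker.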

What is a native build environment?

I am simply reading information off the interwebs, currently the CMake about page, and I need this information to fill in the gaps; it helps to see the big picture.
Surely the answer is straightforward, I hope. What is a native build environment?
Context: I need to know how to build software on my machine (CodeBlocks, etc.), why I need to do this, the advantages of doing this, and so on. But first, I need to understand every piece of jargon I come across, and I could not find any explanation of exactly what a "native build environment" is, although I can speculate to some degree.
"Native" as in "runs directly in the host operating system" and not "runs in a virtual machine or emulator."
The particular point that CMake's about page is trying to convey is the manner in which CMake achieves cross-platform functionality: specifically not by virtualizing, but by cooperating/collaborating directly with the host system, and the "normal" ways the host system is used to doing things.
Is the build environment then just the directory holding all the garbage needed for a compiler to build the software then?
That's an oversimplification – there's nothing to say that it's a single directory – but more or less, yes. The term is not jargon, it literally means "the state of the world" (aka, environment) needed for the build.
What would you call the other thing then, Non-native?
Sure, or virtualized, or emulated, or whatever other intermediate layer has been added.
Why do we need the distinction as well?
Why not? It's useful to have a concise, clear, simple term so we can communicate precisely and with minimal confusion and ambiguity.
Why 'non-native'? If you haven't already figured this out: there is something called cross-compilation.
Simply put, if I don't have access to target hardware (or an equivalent virtual machine) on which the software needs to run, how do I develop this software on my host and package it to run on that target?
Cross-compilation addresses this by providing a toolchain that translates your code for the target platform (for example, building ARM binaries on an x86 host with a toolchain such as arm-linux-gnueabihf-gcc). Such an environment for developing software is called non-native.
Well, I believe we need the term to state the technique.

Cocoa without XCode [closed]

I would like to develop Mac applications, but don't want to use XCode. I have many reasons...
It's VERY slow...
It's complicated...
The Interface Builder seems like cheating and is not as satisfying. (I know, old school)
The whole developer tools set takes a lot of space and takes a long time to download (meanwhile slowing the rest of my computer down)
I know it's possible because I have seen some code compiled with gcc. Are there any tutorials? Are there any tips? I know how to run it, but I just need help learning how to work without Xcode making code for me. Is this a good plan, or is it just destined for failure?
AppCode.
AppCode is an IDE for Objective-C developers building native Cocoa apps for Mac OS X or iOS who strive for higher coding productivity and better code quality.
EditRocket.
EditRocket can compile and execute Objective-C programs. EditRocket uses the gcc compiler to compile Objective-C programs.
GNUstep.
GNUstep provides a robust implementation of the AppKit and Foundation libraries, as well as the development tools available on Cocoa, including Gorm (the Interface Builder) and ProjectCenter (ProjectBuilder/Xcode).
The Cocotron.
The Cocotron is an open source project which aims to implement a cross-platform Objective-C API similar to that described by Apple Inc.'s Cocoa documentation. This includes the AppKit, Foundation, the Objective-C runtime, and support APIs such as CoreGraphics and CoreFoundation.
Take a look at the "build and run a Cocoa Mac application on the command line" post.
See also: alternatives to Xcode for iPhone development? (OR: how to make Xcode suck less?).
I'm not sure what code you think Xcode is generating for you, but if you want to use another IDE then you're free to. Xcode includes all the standard UNIXy command-line tools (though, as of 4.3, you have to explicitly make them available by launching Xcode exactly once and ticking a box in the settings), so you'd use standard GCC methods.
Besides the observation given e.g. here that you'll want to link against the Foundation framework, there's really not much to say.
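For concreteness, here is a minimal sketch of driving such a command-line build from a script, assuming the command-line tools are installed; the file names are placeholders, and clang could equally be gcc here:

```python
#!/usr/bin/env python3
"""Sketch: compile a simple Objective-C source file without the Xcode IDE."""
import subprocess

# Placeholder file names; -framework Foundation links the Foundation framework.
cmd = [
    "clang",
    "-fobjc-arc",             # enable ARC; optional for old-style manual retain/release code
    "-framework", "Foundation",
    "main.m",
    "-o", "myapp",
]
subprocess.run(cmd, check=True)
print("built ./myapp")
```

For a GUI application you would link -framework Cocoa instead, and you would still need an app bundle layout, which is part of what Xcode otherwise automates.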
For the record, the interface designer doesn't generate any code and is therefore no more 'cheating' than using a paint package to draw your graphics.
or is this just destined for failure?
Probably. Apple is making OS X and iOS development very tightly tied to the use of Xcode, particularly if you are intending to submit apps to either store. You'll spend a lot of time working out how to do things the non-Xcode way.
Looking at your points in turn:
1. More so than using that many different tools to achieve the same thing?
2. See 1.
3. You don't have to use Interface Builder if you don't want to, but your given reason ("cheating") is nonsensical.
4. Most of that is documentation, which you will need anyway. It is quite nicely integrated into the editor if you use Xcode.
In short, you are going to waste more time massaging your custom environment than you would by just drinking the Kool-Aid.
It is reasonable to use some other text editor and use Xcode for editing your build environment; then you would be free to execute builds from the command line.

Difference between a script and a program? [closed]

What is the difference between a script and a program? Most of the time I hear that a script is running; is that not a program? I am a bit puzzled, can anybody elaborate on this?
For me, the main difference is that a script is interpreted, while a program is executed (i.e., the source is first compiled, and the result of that compilation is what runs).
Wikipedia seems to agree with me on this:
Script:
"Scripts" are distinct from the core code of the application, which is usually written in a different language, and are often created or at least modified by the end-user. Scripts are often interpreted from source code or bytecode, whereas the applications they control are traditionally compiled to native machine code.
Program:
The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form, from which executable programs are derived (e.g., compiled).
I take a different view.
A "script" is code that acts upon some system in an external or independent manner and can be removed or disabled without disabling the system itself.
A "program" is code that constitutes a system. The program's code may be written in a modular manner, with good separation of concerns, but the code is fundamentally internal to, and a dependency of, the system itself.
Scripts are often interpreted, but not always. Programs are often compiled, but not always.
Typically, a script is a lightweight, quickly constructed, possibly single-use tool. It's usually interpreted, not compiled. Python and bash are examples of languages used to build scripts.
A program is constructed in a compiled language, like C or C++, and usually runs more quickly than a script for that reason. Larger tools are often written as "programs" rather than scripts - smaller tools are more easily developed as scripts, but scripts can get unwieldy as they get larger. Application and system languages (those used to build programs/applications) have tools to make that growth easier to manage.
You can usually view a script in a text editor to see what it does. You can't do that with an executable program - the latter's instructions have been compiled into bytecode or machine language that makes it very difficult for humans to understand, without specialized tools.
Note the number of "oftens" and "usuallys" above - the terms are nebulous, and cross over sometimes.
See:
The Difference Between a Program and a Script
A script is also a program, but without an opaque layer hiding the source code, whereas a program is one wearing clothes: you can't see its source code unless it is decompilable.
Scripts need another program to execute them, while programs don't.
A "program" in general, is a sequence of instructions written so that a computer can perform certain task.
A "script" is code written in a scripting language. A scripting language is nothing but a type of programming language in which we can write code to control another software application.
In fact, programming languages are of two types:
a. Scripting Language
b. Compiled Language
Please read this:
Scripting and Compiled Languages
Scripts are usually interpreted (by another executable).
A program is usually a standalone, compiled executable in its own right (although it might have library dependencies), consisting of machine code or bytecodes (for just-in-time compiled programs).
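The line is blurrier than it sounds: even a "scripting" language like Python compiles source to bytecode before an interpreter (another executable) runs it. A quick way to see that bytecode, as a sketch:

```python
import dis

def greet(name):
    return "hello, " + name

# Show the bytecode the CPython "interpreter" actually executes.
dis.dis(greet)
```

This is one reason the interpreted-versus-compiled distinction describes implementations rather than languages, as the "Scripting and Compiled Languages" link above argues.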
There are really two dimensions to the scripting vs program reality:
Is the language powerful enough, particularly with string operations, to compete with a macro processor like the POSIX shell, and particularly bash? If it isn't better than bash for running some function, there isn't much point in using it.
Is the language convenient and quickly started? Java, Scala, JRuby, Clojure, and Groovy are all powerful languages, but Java requires a lot of boilerplate, and the JVM they all require just takes too long to start up.
OTOH, Perl, Python, and Ruby all start up quickly and have powerful string handling (and pretty much everything-else-handling) operations, so they tend to occupy the sometimes-disparaged-but-not-easily-encroached-upon "scripting" world. It turns out they do well at running entire traditional programs as well.
Left in limbo are languages like JavaScript, which aren't used for scripting but potentially could be. Update: since this was written, node.js was released on multiple platforms. In other news, the question was closed. "Oh well."
Script: it contains a set of "scripting language" instructions that control and run other system programs and applications; it can also be scheduled.
Program: it contains a set of instructions that perform a certain task once the program has been compiled with a compiler.
From my perspective, the main difference between a script and a program is:
Scripts can be used within other technologies. Example: PHP scripts, JavaScript, etc. can be used within HTML.
Programs are stand-alone chunks of code that can never be embedded into other technologies.
If I am wrong anywhere, please correct me; I would appreciate the correction.
A framework or other similar scheme will run/interpret a script to do a task. A program is compiled and run by a machine to do a task.
IMO:
Script: the kind of instructions that a program is supposed to run.
Program: the kind of instructions that hardware is supposed to run.
Though I guess .NET/Java bytecodes are scripts by this definition.