Which minified Processing.js file should I use at runtime? - processing.js

http://processingjs.org/content/download/processing-js-1.2.1/processing-1.2.1.min.js
http://processingjs.org/content/download/processing-js-1.2.1/processing-1.2.1-api.min.js
I just noticed that two JavaScript files ship with the distribution. Which one should I use for runtime functionality?

You could use either one; both are minified for production to save bandwidth. The difference is in what they contain: the full build, http://processingjs.org/content/download/processing-js-1.2.1/processing-1.2.1.min.js, includes the parser that translates Processing-language code into JavaScript, so it can run .pde sketches directly. The -api.min.js build contains only the JavaScript API, which is enough if you write your sketches directly in JavaScript and never need the parser.
Jeffrey Kevin Pry

You want to use the non-api version. It looks like they added a separate build which only includes the JS API itself; it doesn't include the parser which translates Processing code into JavaScript.
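For illustration, here is roughly what the JavaScript-only usage looks like, which is all the -api build supports (the canvas id and sketch code below are invented for the example; the full build can additionally attach .pde sketches via a data-processing-sources attribute on the canvas):

var canvas = document.getElementById("sketch"); // hypothetical canvas id
var sketch = function (processing) {
    processing.setup = function () {
        processing.size(200, 200);
    };
    processing.draw = function () {
        processing.background(127);
        processing.ellipse(100, 100, 50, 50);
    };
};
var instance = new Processing(canvas, sketch); // works with either build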

Laravel 4 and Dojo Toolkit AMD implementation: how to?

Has anyone ever tried implementing the Dojo Toolkit AMD loader with Laravel 4, or could anyone please point me to a simple sample?
1. Just a simple AMD implementation on Laravel?
2. Which asset manager should I use (or is the default OK), and how do I use it with Dojo?
Please help. Thanks.
For 1: I suggest you try this Laravel 4 bootstrap suite; it gives you a RequireJS implementation out of the box.
For 2: You can use Dojo with any asset manager you want, or even without one (although that is not ideal), just by putting its .js files in your /public directory and including them from inside your view templates as you would in plain HTML. If you are using Blade templates, make sure the template syntax is not colliding with your JS syntax. If it is, then @include a .php file containing your JS code section from your .blade.php view template.
An asset manager gives you a more elegant and correct way of doing the same thing. It may be extremely useful if you are dealing with LESS or CoffeeScript sources that need to be compiled into regular JS and CSS.
If you want an advanced asset manager, I would suggest you look at /CodeSleeve/asset-pipeline on GitHub. It's one of many asset managers for Laravel, but one of the few being kept alive (take a look at basset or the laravel-grunt options on GitHub, for instance).
Asset Pipeline does a good job of making asset management similar to the one in Rails. Here is an article on how and why to use it: http://culttt.com/2013/11/04/add-asset-pipeline-laravel-4/
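For reference, once dojo.js is available under /public and pulled in from your view, a minimal AMD-style require (Dojo 1.7+) looks like the sketch below; the "greeting" element id is invented for the example:

require(["dojo/dom", "dojo/domReady!"], function (dom) {
    // "greeting" is a hypothetical element id in the rendered Blade view
    dom.byId("greeting").innerHTML = "Hello from Dojo under Laravel";
});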

Examples of Semantic Version Names

I have been reading about semver. I really like the general idea. However, when it comes to putting it into practice, I feel like I'm missing some key pieces of information. I'm not sure where the name of a library fits in, or what to do with file variants. For instance, is the file name something like [framework]-[semver].min.js? Are there popular JavaScript frameworks that use semver? I don't know of any.
Thank you!
Let me try to explain.
If you are not developing a library that you intend to maintain for years to come, don't bother with it. If you prefer to version every release, read the following.
Suppose you are an architect or developer building a library that is meant to be used by hundreds of developers over time, in a distributed manner. You really need to be careful about what you are doing and what your developers are adding (interesting features that tempt you to push those changes into the currently distributed file). How do you tell your library users to upgrade, and in what scenarios? Historically, people followed versioning schemes of their own and, interestingly, most of those schemes worked fine.
Then why do you need semver?
The idea is that there should be a concrete specification for a group of people to follow collectively, even if they each already know it in their heads. With that thought, a specification was made. Its authors observed and collected the best practices in the world for versioning software and published them on a single website: semver.org. Its main principles are:
Imagine you have already released your library with a version "lib.1.0.98". Now follow these rules for subsequent development.
Say your library is bundled and named xyz, and,
given a version number MAJOR.MINOR.PATCH (like xyz.MAJOR.MINOR.PATCH), increment the:
1. MAJOR version when you make incompatible API changes
(existing user code breaks if they adopt the new version without changing their programs),
2. MINOR version when you add functionality in a backwards-compatible manner
(existing code keeps working, perhaps with some improvements in performance and features), and
3. PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
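A quick worked example of these rules (version numbers invented for illustration):

1.4.2 -> 1.4.3                bug fix only (PATCH)
1.4.3 -> 1.5.0                new backwards-compatible feature (MINOR, PATCH resets)
1.5.0 -> 2.0.0                breaking API change (MAJOR, MINOR and PATCH reset)
2.0.0-beta.1                  pre-release label for the upcoming major
2.0.0+20130313144700          build metadata appended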
If you are not a developer, or are not in a position to develop a library to this kind of standard, you need not worry at all about semver.
Finally, the famous d3 library follows this practice.
Semantic Versioning only defines how to name your versions. It does not specify what you will do with your version number afterwards. You can put the version number in package names, store it in a properties file inside your application, or just publish it in a wiki. All those options are open to discussion and not part of the problem space addressed by SemVer.
semver is used by npm and bower (and perhaps some other tools) for dependency management. Using semver it is possible to decide which versions of which packages to use when multiple libraries depend on the same package.
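For example, an npm manifest expresses acceptable versions as semver ranges; the package names below are made up. Roughly, "~1.2.3" accepts patch updates only (>=1.2.3 <1.3.0), while "^1.2.3" accepts minor and patch updates (>=1.2.3 <2.0.0):

{
    "dependencies": {
        "some-lib": "~1.2.3",
        "other-lib": "^1.2.3"
    }
}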
As others have said, semantic versioning is a standard versioning scheme that tells your users which versions of your library should be compatible with each other, and which ones are not.
The idea is to give your users more confidence that it's safe to upgrade to a newer patch or minor version, because it's tried, tested, and backwards compatible with the previous version. At least, that's what you're telling your users.
As far as tooling goes, I don't do much in JavaScript, but I typically let my build server handle stamping my assemblies etc. with the correct version. I have a static major number I bump whenever I make breaking changes, a static minor number I bump every time I add new features, and an auto-incrementing patch number for whenever I check in bug fixes.
Especially if this is a JavaScript library you plan to share on a public repository of some kind (nuget, gem, etc.), you probably want some form of automated packaging system, and you put the logic for specifying your version number in there (in the package metadata and in the name of the JavaScript file, which is typically the standard I've seen).
Take a look at sbt which is the Scala Build Tool. In it, we write dependencies like this:
val scalatest = "org.scalatest" %% "scalatest" % "2.1.7" % "test"
val jodatime = "org.joda" % "jodatime" % "1.4.5"
Wherein the operator %% means "append the version of Scala that you're building against." Packaging things in this language generally creates JAR files with names like <my project>_<scala version>-<library version>.jar, which is quite handy for semantically naming things automagically. The % operator can be interpreted as "don't version this part."
That said, this convention resulted from the fact that the same library compiled against different Scala versions was not binary compatible. So it was more a result of the binary incompatibilities than a conscious design choice.

Compiling Sass with custom variables per request under Rails 3.1

In a Rails 3.1 app, one controller needs to have all its views compile whatever Sass stylesheets they might need per request using a set of custom variables. Ideally, the compilation must happen via the asset pipeline so that content-based asset names (ones that include an MD5 hash of the content) are generated. It is important for the solution to use pure Sass capabilities as opposed to resorting to, for example, ERB processing of Sass stylesheets.
From the research I've done here and elsewhere, the following seems like a possible approach:
Set up variable access
Create some type of variable accessor bridge using custom Sass functions, e.g., as described by Konstantin Haase here (gist). This seems pretty easy to do.
Configure all variable access via a Sass partial, e.g., in _base.sass which is the Compass way. The partial can use the custom functions defined above. Also easy.
Capture all asset references
Decorate the asset_path method of the view object. I have this working well.
Resolve references using a custom subclass of Sprockets::Environment. This is also working well.
Force asset recompilation, regardless of file modification times
I have not found a good solution for this yet.
I've seen examples of kicking off Sass processing manually by instantiating a new Sass::Engine and passing custom data that will be available in Sass::Script::Functions::EvaluationContext. The problem with this approach is that I'd have to manage file naming and paths myself and I'd always run the risk of possible deviation from what Sprockets does.
I wasn't able to find any examples of forcing Sprockets processing on a per-request basis, regardless of file mod times, that also allows for custom variable passing.
I'd appreciate comments on the general approach as well as any specific pointers/suggestions on how to best handle (3).
Sim.
It is possible. Look here: SASS: Set variable at compile time.
I wrote a solution to address it; I'll post it soon and push it here, in case you or someone else still needs it.
Sass is designed to be precompiled to CSS. Having Sprockets do this for every view request, on a per-request basis, is not going to perform well. Every request will have to wait for the compilation to finish, and it is not fast (from a page-serving point of view).
The MD5 generation is within Sprockets, so if you are changing custom variables you are going to have to force a compilation on every single request to make sure the changes are seen, because the view is (probably) not going to know.
It sounds as though this is not really in the sweet-spot of the asset-pipeline, and you should look at doing something more optimised for truly dynamic CSS.
Sorry. :-)

How to load uncompressed Dojo files from Google's CDN?

Is there a good way to bring in the uncompressed version of a dependency using dojo.require? I'm already requesting dojo.xd.uncompressed.js, but all of the dijits I'm using, etc. are being provided in compressed form. Is there a flag I'm missing somewhere? Thanks.
There's not really a way to do this for individual modules outside of base or prebuilt layers, as far as I know. Individual modules don't have uncompressed versions built.
If you need to debug something in dijit, you might have some success manually loading http://ajax.googleapis.com/ajax/libs/dojo/1.6.1/dijit/dijit-all.xd.js.uncompressed.js via script tag (it's the uncompressed version of a layer including most if not all dijit widgets), but realize you should never load this layer in production, as it is most likely more than you need.
(edit) of course, the other option is to download the dojo source yourself (the -src.zip or -src.tar.gz at http://download.dojotoolkit.org/release-1.6.1/) and run it all on a local webserver.
Maybe you should use something like google.load("dojo", "1.6.1", {uncompressed:true});?
Can you show us your header?
http://code.google.com/intl/fr/apis/libraries/devguide.html#dojo
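For reference, that pattern looks roughly like this, per the Google Libraries API docs linked above (version number taken from the question):

<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
    // Ask the Google loader for the uncompressed Dojo build
    google.load("dojo", "1.6.1", { uncompressed: true });
</script>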

Dojo vs Dijit - files to include or reference?

I've been reading the O'Reilly book "Dojo: The Definitive Guide", but some things are still not definitive to me.
They talk about "bootstrapping" and getting dojo.css from the AOL CDN.
When I'm testing on my machine, should I use the CDN? Or should I wait and use that only when I deploy?
Secondly, the book talks about CDN for dojo, but not for dijit.
I'm developing on Google App Engine (GAE), so having the 2000+ Dojo/Dijit files in my JavaScript directory is a little annoying, because it slows down my upload to GAE each time.
Firebug is giving me this error:
GET http://localhost:8080/dijit/nls/dijit-all_en-us.js 404 not Found
GET http://localhost:8080/dijit/_editor/plugins/FontChoice.js 404 not Found
I downloaded the sample from here:
http://archive.dojotoolkit.org/nightly/dojotoolkit/dijit/themes/themeTester.html?theme=soria
and I'd like to "simply" get it to run on my machine under local google app engine (which is the localhost:8080 that you see in the URLs above).
I see this statement which probably is causing the second 404 above:
dojo.require("dijit._editor.plugins.FontChoice");
One other error:
cannot access optimized closure
preload("en-us") dijit-all.js (line 479)
anonymous("dijit.nls.dijit-all", ["ROOT", "ar", "ca", 40 more... 0=ROOT 1=ar 2=ca 3=cs 4=da 5=de 6=de-de 7=el 8=en 9=en-gb])dijit-all.js (line 489)
dijit-all.js()
dojo.i18n._searchLocalePath(locale, true, function(loc){\n
To continue for now, I'm going to try to copy the entire dijit library, but is there a solution short of that?
My current script includes look like this:
<script type="text/javascript" src="/javascript/dijit.js"></script>
<script type="text/javascript" src="/javascript/dijit-all.js" charset="utf-8"></script>
I got the dijit.js file by copying and renaming dijit.js.uncompressed.js to dijit.js.
You have a few options actually:
You could use the CDN for everything (though using the full source locally does give you better error messages). Google has them as well; Dijit is here: http://ajax.googleapis.com/ajax/libs/dojo/1.3.2/dijit/dijit.js FYI. This has many advantages in my opinion, user caching of the JS being the primary one.
Build a layered file. I think the O'Reilly book has a section about it but the PragProg book is better in this regard IMO. There's also this doc on dojocampus.org about building. This will trim down the files you need to upload to GAE and speed up your app loading. This is actually what I do in order to cut down on HTTP requests.
Keep doing what you are doing. :)
Regarding the errors you are seeing: the 404s for the en-us files are essentially harmless. Here's a better description.
You also might be reloading dijit files by using dijit.uncompressed.js and dijit-all.js and causing problems in the process...but I'm not sure about this one.
I just want to clarify that when using the CDN, all you need to include is the main Dojo script. The rest will be pulled in automatically when you dojo.require() them.
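For illustration, something like this is all it takes; the Dojo version and the widget are picked arbitrarily for the example:

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/dojo/1.3.2/dojo/dojo.xd.js"></script>
<script type="text/javascript">
    dojo.require("dijit.form.Button"); // fetched cross-domain on demand
    dojo.addOnLoad(function () {
        // safe to instantiate dijit.form.Button here
    });
</script>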
If for some (technical) reasons you don't want to use the X-Domain loader (CDNs use this type of loader), you can do a custom build (well-described in many places). After the build you copy only relevant files to your server. No need to copy all 2000+ tests, demos, unused DojoX projects, Dijits and so on.
During the build you will create a single minified file (or a few layers), which will include all the Dojo JavaScript code you use. If you use Dojo widgets, their templates will already be inlined, so you do not incur hits for them. As part of the build, CSS files are combined and minified too. So literally, in most cases you will have just two files: a Dojo layer, which includes everything plus your custom code, and a CSS file. In more complex cases you may have more files, but usually we are talking about a handful.
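As a sketch, a legacy-style build profile for such a layer might look like the following; the profile, layer, and module names are invented for the example, and the build is typically run from util/buildscripts with something like ./build.sh profile=mylayer action=release:

dependencies = {
    layers: [
        {
            // hypothetical layer file, relative to the dojo directory
            name: "../mylayer/mylayer.js",
            dependencies: [
                "dijit.form.Button",
                "dijit.Dialog"
            ]
        }
    ],
    prefixes: [
        ["dijit", "../dijit"],
        ["dojox", "../dojox"]
    ]
};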
How do you make sure that everything is in the build? Fire up your favorite network analyzer (Live HTTP Headers, Firebug, Fiddler2, or Charles Proxy would do fine) and see if you hit any files outside of your build. If you do, include them in the build, or try to figure out why they are requested and eliminate those requests (some localization-related calls are fine).
Personally I would start with the CDN option: it works well with no hassle, and it's hosted by somebody else with fat pipes.
To address your first question, use the full source version locally for development, so that you can get clearer debug info which points to a legible line in source, rather than the single line the minified version is reduced to. Use the CDN for production.