After building an application using Seaside I managed to push my Pharo image code to GitHub using Iceberg. I was able to clone it into a new Pharo image on a new machine. However, loading the package into the image generates an error about missing Seaside dependencies. I still don't understand the concept of adding a dependency to a Pharo image. Could someone explain how to go about doing it? I need it for code deployment and collaboration.
I'm sorry, I don't completely understand your question. If you mean how you can define a project (which can have dependencies, etc.), something like you would do with, for instance, Maven, then you need to define a Baseline.
A baseline is a class (and a package) that you need to define and save with your sources. Take this one as an example: https://github.com/estebanlm/logger/blob/master/src/BaselineOfLogger/BaselineOfLogger.class.st
(this is the smallest example I found, and the project itself is not very interesting).
I will explain it in parts:
You have a class named BaselineOfLogger that inherits from BaselineOf and is placed in a package with the same name as the baseline (this is important, for the tools to find it later).
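In Tonel format (the format used to store the code on GitHub, as in the example above), such a class definition is tiny and looks roughly like this:

Class {
    #name : #BaselineOfLogger,
    #superclass : #BaselineOf,
    #category : #BaselineOfLogger
}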
You define a method tagged with the baseline pragma (pragmas are a bit like annotations):
BaselineOfLogger >> baseline: spec [
    <baseline>
    spec for: #pharo do: [
        self beacon: spec.
        spec package: 'Logger' ]
]
As you can see, this method defines a "spec" for Pharo:
- it will load the Beacon project (we'll see this later)
- it declares that it will load the package Logger.
The method beacon: is defined like this:
BaselineOfLogger >> beacon: spec [
    spec
        baseline: 'Beacon'
        with: [ spec repository: 'github://pharo-project/pharo-beacon/repository' ]
]
and as you can see, it points to another project (and another baseline).
Now, since you need Seaside, your Baseline could look something like this:
BaselineOfMyProject >> baseline: spec [
    <baseline>
    spec for: #pharo do: [
        spec
            baseline: 'Seaside3'
            with: [ spec repository: 'github://SeasideSt/Seaside:v3.2.4/repository' ].
        spec package: 'MyPackage' ]
]
Finally, in your image, to load the project you will do something like this:
Metacello new
    repository: 'github://yourname/yourprojectname/src';
    baseline: 'MyProject';
    load.
That's more or less it. But please note that declaring dependencies is a complicated matter (no matter the language you use), and this example covers just the very basics.
How can I find modules that have been installed locally, and which I can use in a Raku program?
Assume that we have three distributions: Parent, Brother, Sister. Parent 'provides' Top.rakumod, while Brother and Sister provide 'Top::Son.rakumod' and 'Top::Daughter.rakumod', respectively. Brother and Sister have a 'depends': 'Top' in their META6.json.
Each distribution is in its own git repo. And each are installed by zef.
Suppose Top is set up as a class with an interface method, perhaps something like multi method on-starting { ... }, which each sub-class has to implement and which, when run, provides the caller with information about the sub-class. So both Top::Son and Top::Daughter implement on-starting. There could also be distributions Top::Aunt and so on, which are not locally installed. We need to find which are installed.
So, now we run an instance of Top (as defined in Parent). It needs to look for installed modules that match Top::*. The place to start (I think) is $*REPO, which is a linked list of repositories containing the modules that are installed. $*REPO also does the CompUnit::Repository role, which in turn has a 'need' method.
What I don't understand is how to manipulate $*REPO to get a list of all the candidate modules that match Top::*, along the whole of the linked list.
Once I have the list of candidates, I can use ^can to check whether each has an on-starting method, and then call that method.
If this is not the way to get to a result where Top finds out about locally installed modules, I'd appreciate some alternatives to the scheme I just laid out.
CompUnit::Repository (CUR) has a candidates method for searching distributions, but it does not allow searching by name prefix (since it also does fast lookups, which require the full name to get its sha1 directory/lookup). For a CompUnit::Repository::FileSystem (CURFS) you can call .distribution to get the distribution it provides, and for a CompUnit::Repository::Installation (CURI) you can call .installed to get all the distributions it provides:
raku -e ' \
    say $*REPO.repo-chain \
        .grep(CompUnit::Repository::FileSystem | CompUnit::Repository::Installation) \
        .map({ $_ ~~ CompUnit::Repository::FileSystem ?? $_.distribution !! $_.installed.Slip }) \
        .grep(*.defined) \
;'
If you want to match namespaces, you would then need to grep on the distribution's name or the names of its modules:
my @matches = @distributions.grep({ $_.meta<provides>.keys.first({.starts-with("Top::")}) });
This way of handling things can be seen in the Pluggable module (which I'd suggest using if you also want to load such code)
Of course you explicitly asked only for installed modules, but ignoring CURFS doesn't make any sense -- as an application developer it isn't supposed to matter where or how a module is loaded. If someone wants to use -I ./foo instead of installing it there isn't a good reason to ignore it. If you insist on doing this anyway it should be obvious how to change the example above to accommodate.
Once I have the list of candidates, I can use ^can to check whether each has an on-starting method, and then call that method.
Having a list of candidates doesn't let you do anything other than inspect the META data or slurp the source code of various files. At the very least you would need to load whatever module you want to call e.g. .^can on, and there are a few steps involved in that; the distribution object can't be used directly to do the loading (you extract the full module name from it and use that to load it) -- so again I'd suggest using Pluggable.
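For completeness, a rough sketch of that last step, reusing the @matches list from above (whether on-starting should be called on the type object or on an instance depends on how Top defines it):

for @matches -> $dist {
    # each distribution lists its modules under meta<provides>
    for $dist.meta<provides>.keys.grep(*.starts-with('Top::')) -> $name {
        require ::($name);       # load the module at runtime by name
        my $type = ::($name);    # look up the now-loaded package
        $type.on-starting if $type.^can('on-starting');
    }
}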
I'm trying to get my head around Webpack 4 for a medium-to-large scale (MVC) website.
For the solution, I want to use the following base vendor scripts/styles:
jQuery vLatest minified version
Bootstrap, but only the grid, no javascript or anything else
The site consists of several templates, each different from the others. Some might have an image gallery where I want to use Owl Carousel vLatest, and so on, so forth.
As I've understood it, the vendor bundle should only contain the scripts/styles that are used across the entire site, so e.g. the Owl Carousel script/styles should not be part of the vendor bundle, since they're only used for one, maybe two specific templates.
I've installed jQuery and Bootstrap via npm so they're in the node_modules folder. Question is: how do I tell Webpack to use the minified version of jQuery in the vendor bundle? And how do I tell it to use only the grid component from Bootstrap? And what about the other third party scripts/styles, should they be included as their own entry?
My webpack.config.js entry file looks like this:
entry: {
    'mysite.bundle.css': './scripts/webpack-entries/mysite.styles.js',
    'mysite.bundle.js': glob.sync('./scripts/mysite/*.js'),
    'vendor.bundle.js': [
        './node_modules/jquery/dist/jquery.min.js'
    ],
    'vendor.bundle.css': [
        './node_modules/bootstrap/scss/bootstrap-grid.scss'
    ],
}
What feels weird about this is that I could just as well reference jquery.min.js directly in my view and import bootstrap-grid.scss directly in my .scss files. The same could be said of the Owl Carousel (and other vendor scripts).
Also, if I just do this: 'vendor.bundle.js': ['jquery'] the entire non-minified jQuery library is loaded rather than the minified version.
How exactly do you work with Webpack and NPM this way? :-)
Thanks in advance.
You can use { resolve } to configure aliases:
{
    resolve: {
        alias: {
            'jquery': require.resolve('jquery/dist/jquery.min.js')
        }
    }
}
However, I would caution you first to focus on getting a viable build that's suitable for development, and then enhance the configuration as needed to optimize for production. For example, during development you want to include all the sources in their entirety, with good source maps. When you get to the point of publishing, use something like Environment Variables to introduce a flag that enforces the necessary configuration.
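One way to do that, sketched below by branching on the CLI mode flag rather than an environment variable (webpack also lets you pass values via --env):

// webpack.config.js (sketch)
module.exports = (env, argv) => {
    const isProduction = argv.mode === 'production';
    return {
        mode: isProduction ? 'production' : 'development',
        devtool: isProduction ? false : 'eval-source-map',
        // ...entries, loaders and plugins as usual; only tighten minification/splitting when isProduction
    };
};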
No, it's not necessary to create entry points for particular vendor sources; this is reminiscent of past practices. You should create individual entry points to logically split a large codebase into distinct bundles (the public web, the administrative application, the customer application), should you have the need to do so.
Also, don't spend too much time creating entry points to group vendor sources and such. Write your modules as you would from the perspective of a developer, require from them what they depend on, and then use webpack's { optimization.minimizer }, other minification plugins, and its dependency-graph heuristics to create the necessary chunks via { optimization.splitChunks }.
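A minimal sketch of that, assuming a single entry whose own modules import jQuery and the Bootstrap grid (the entry name and paths are illustrative):

// webpack.config.js (sketch)
module.exports = {
    mode: 'production',
    entry: {
        site: './scripts/mysite/index.js'
    },
    optimization: {
        splitChunks: {
            cacheGroups: {
                vendor: {
                    test: /[\\/]node_modules[\\/]/, // anything imported from node_modules
                    name: 'vendor.bundle',
                    chunks: 'all'
                }
            }
        }
    }
};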
The short answer is, and this has been true for webpack for a long time: do not change the way you write and organize sources to satisfy webpack. It's polished and sophisticated enough that it will accommodate your style of authoring.
I'm trying to understand what is safe vs. not safe with respect to the Eclipse plugin lifecycle.
Background
Something in the Eclipse/RCP/OSGI framework allows for circular dependencies between bundles by allowing bundles to provide extension points. If bundle X provides an extension point, Bundle Y may both depend on bundle X, and provide an extension that implements an interface or extends a class known to X, and make that extension available to bundle X.
Then there's the promise of activators: as far as I understand, it is promised that your activator's start(BundleContext) method will be called before any class in your bundle is made available to any other bundle, and that your dependencies' start(...) methods will have been called before yours.
Limitations/Possible Contradictions
Now, I'm ready to describe my conundrum: I would like to retrieve all the providers of a specific extension point as soon as possible; the easy way to do this would appear to be in the activator of my bundle.
However, if what I've described about the promises that the Eclipse/RCP/OSGI framework makes is true, then I'm pretty sure it shouldn't be possible for me to do that during activation:
Either:
(1) I'll have a reference to classes provided by one of my dependencies before their start(...) method has been called, or
(2) my dependency's start(...) method will have to be called before mine, or
(3) no violations will occur, but I'll retrieve zero extensions, because the plugins that depend on me couldn't be started before me, so their implementations of my extension point are not yet available.
Why I Need Extensions at Startup
My challenge is that I need to load some data ASAP after the startup of my plugin, but I need to ensure that my extensions are loaded first, because the extensions in question are extensions to the data format of the data that I need to load; if I load the data first, it fails or becomes corrupted.
I'm also wondering whether my picture of the Eclipse plugin lifecycle is correct, because, despite searching for discussions of the plugin lifecycle, I haven't come across any warnings about its limitations; I'm fairly certain it must be possible to do things wrong and create serious problems, and I'd like to understand under what circumstances things would go wrong so I can avoid creating problems.
The extension point registry accessed by the IExtensionRegistry interface will tell you about extension points without starting any of the plugins involved.
IExtensionRegistry extReg = Platform.getExtensionRegistry();
In the registry for an extension point you will have a number of IConfigurationElement entries describing the individual extensions declared by plugins. It is only when you call the createExecutableExtension method of this interface that the contributing plugin is started.
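A minimal sketch of that flow (the extension point id and attribute names below are made up for illustration):

import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IConfigurationElement;
import org.eclipse.core.runtime.IExtensionRegistry;
import org.eclipse.core.runtime.Platform;

public class DataFormatExtensions {
    public void readExtensions() throws CoreException {
        IExtensionRegistry registry = Platform.getExtensionRegistry();
        // "com.example.myplugin.dataFormat" is a hypothetical extension point id
        IConfigurationElement[] elements =
                registry.getConfigurationElementsFor("com.example.myplugin.dataFormat");
        for (IConfigurationElement element : elements) {
            // reading declared attributes does not activate the contributing plugin
            String formatName = element.getAttribute("name");
            // only this call instantiates the contributed class and starts its plugin
            Object contribution = element.createExecutableExtension("class");
        }
    }
}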
Note: A plugin's activator start method is not normally run until Eclipse needs to run some other code in the plugin - it does not run at Eclipse startup unless you force it to.
First, we're new to Dojo and are free to do things the "new" way, so I'm mostly ignoring the pre-1.7 sections of the documentation. Still, I'm coming up confused when comparing various articles, documents, and sample scripts.
The bottom line is that I can't find a straightforward write-up on how to create and deploy a custom build for Dojo. Most specifically, which .js and .css files we need to deploy. There's a lot of documentation on creating a build, but none I've found on deploying.
I eventually gathered that building everything into a single dojo.js is a reasonable practice for mobile, and that I simply have to extract that one file out of the build directories and deploy it to my server, but then I get missing CSS references, and it doesn't seem like trial-and-error is the correct way to resolve those.
Here's our specific, current case:
<script type="text/javascript">
    require(
        // deviceTheme to auto-detect device styles
        [
            "dojox/mobile",
            "dojox/mobile/parser",
            "dojox/mobile/deviceTheme"
        ]);
</script>
Here's the build profile:
dependencies = {
    stripConsole: "normal",
    layers: [
        {
            name: "dojo.js",
            customBase: true, // prevent automatic inclusion of dojo/main
            dependencies: [
                "dojox.mobile.parser",
                "dojox.mobile",
                "dojox.mobile.deviceTheme"
            ]
        }
    ],
    prefixes: [
        [ "dijit", "../dijit" ], // example included; not clear why
        [ "dojox", "../dojox" ]
    ]
}
(Executed by the dojo-release-1.7.2-src\dojox\mobile\build\build.bat script.)
So I guess the specific questions are:
Which CSS files do I deploy for this case?
How do I know in general which files, including CSS files, to deploy?
Is there a good, current tutorial that I'm missing?
Are the existing scripts up-to-date? For example, why does mobile-all.profile.js use dependencies= instead of the profile= that the 1.7 build tutorial describes?
Which CSS files do I deploy for this case?
This is conditional: if a page uses a specific module and that module has its own CSS rules, include them.
There is no hard rule here, but starting out with dojo.css (base, reset), dijit.css (base, layouts and more), nihilo.css (example theme) and android.css (example theme) would make a good foundation.
If you want to use a theme, include its 'basename', e.g. dojox/mobile/themes/iphone/iphone.css.
If, for instance, iphone.css does not '@import' some exotic widget you use, include the CSS delivered by the widget itself (for example, dojox/widget/ColorPicker/ColorPicker.css); see the sketch below. The docs for each widget should make note of this.
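For example (paths depend on where the built tree ends up on your server):

<!-- sketch: a mobile theme plus one widget's own css; adjust paths to your deployment -->
<link rel="stylesheet" href="js/dojox/mobile/themes/iphone/iphone.css">
<link rel="stylesheet" href="js/dojox/widget/ColorPicker/ColorPicker.css">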
How do I know in general which files, including CSS files, to deploy?
There is no harm in uploading all files; the loader will decide which to get from the cached build, which to leave alone, and what to download at runtime.
Deploy everything. When you do a build with action=release, all files are minified within the defined prefixes: dojo (always, by default), dijit, dojox, and any packages you add.
The build results in an optimized toolkit tree; deploy all of it, or you might face a situation where a conditional include (Toggler.js for instance, or the acme.js selector, etc.) is missing.
Is there a good, current tutorial that I'm missing?
As a rule of thumb, any 1.6+ docs are stable, but they mostly say the same thing. As you start on a profile, it may get a bit trial-and-error (the sequence of inline HTML inclusion of script files is of utmost importance). What you posted looks good, though think about whether customBase: true is necessary.
Make sure you have seen this for versions 1.6-1.7 and version 1.8
Are the existing scripts up-to-date? For example, why does mobile-all.profile.js use dependencies= instead of the profile= that the 1.7 build tutorial describes?
You can use the existing ones out of the box. The builder, however, is changing as it moves toward the much-debated 2.0 release. For now, the new schemes are allowed as well as the regular ones. In fact, the variable name can be foo or bar; only the variable's value is put to use. The file must 'return' a single object.
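For instance, a 1.7-style sketch of the same mobile layer would look roughly like this (the variable name and directory values are just examples):

var profile = {
    basePath: "..",
    releaseDir: "../release",
    layers: {
        "dojo/dojo": {
            include: [ "dojox/mobile", "dojox/mobile/parser", "dojox/mobile/deviceTheme" ],
            customBase: true,
            boot: true
        }
    }
};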
As I understand it, the reason is that CommonJS Packages/1.0 is the new bible for AMD and the 1.7+ builder. It has always been the rough scheme the package hierarchy has followed, however - but the syntax will most likely get more and more strict the closer we move to 2.0. (See if you can hook up with snover on #dojo on irc.freenode.org.)
Is this possible using v1.6.1? Due to the Xdomain configuration of my client's dojo deployment, it is necessary to execute a new build each time dev code changes. As you can imagine, this is a huge time waster.
From everything I can see, there is no way to exempt the core from the build while playing by Dojo's rules. So I am wondering if there is a way to break the rules (modifying the Rhino calls?) to get to where I need to be.
A couple thoughts.
You can avoid building most of dojo (dijit, dojox) but I imagine you already know that
This restriction you are facing seems odd. Isn't there some way you can just upload the specific JS files you are editing during development?
Maybe if you give more details on the client setup, I can help you brainstorm a way around this problem.
Update
Here's what I think you need: Customize Dojo Base in Build. This allows you to specify particular bits of the dojo base to include.
This works in pre-1.7, so you should be good.
Appears to be exactly what you want:
layers: [
    {
        name: "dojo.js",
        customBase: true,
        dependencies: [
        ]
    },
    // ... remainder of profile
]
This will give you the absolute bare minimum of dojo (which you still don't need for your dev scenario, but which will drastically reduce the amount of files processed).
For other use cases, you can use the dependencies attribute to add in other stuff from dojo core.
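For example (the module names here are just placeholders for whichever core pieces you actually use):

layers: [
    {
        name: "dojo.js",
        customBase: true,
        dependencies: [
            "dojo.fx",        // placeholder: add back only the core modules you need
            "dojo.io.script"
        ]
    }
]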
Update 2:
Here are a couple of build-time optimization suggestions:
1) Don't intern strings, and don't compress, when in dev.
There are arg values you can pass to avoid these time-consuming steps (example is for ant build):
<arg value="internStrings=false"/>
<arg value="layerOptimize=false"/>
2) Build to a ram disk to speed copying of files
Dojo supports mix-and-match - so you can use an xdomain and/or custom build for the stuff that does not change, use a regular dojo.require for the JS/widgets that change often, and then just push that JS to see the change without a new xdomain/custom build/deployment.
You can explore using local modules with xdomain build. Also, Dojo allows using multiple custom builds - so you can do a stable custom build for the widgets that don't change so much and another smaller build for code that is changing frequently.
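A sketch of that mix-and-match setup, assuming the frequently changing code lives in its own package outside the built/xdomain tree (names and paths are illustrative):

// in the page, after the built/xdomain dojo.js has loaded
dojo.registerModulePath("myapp", "/js/myapp");  // local, non-built package
dojo.require("myapp.EditGrid");                 // fetched individually; edit and push without rebuilding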
Why not use Dojo 1.7, load asynchronously, and rely on its legacy support? http://livedocs.dojotoolkit.org/loader/amd
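A minimal sketch of that (paths assume dojo.js sits next to the page; adjust to your layout):

<script src="dojo/dojo.js" data-dojo-config="async: true"></script>
<script>
    // modules are fetched individually on demand, so no rebuild is needed while developing
    require(["dojox/mobile", "dojox/mobile/parser", "dojox/mobile/deviceTheme"], function (mobile, parser) {
        parser.parse();
    });
</script>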