Running multiple JScript files in a separate process

I am looking for a way to launch multiple scripts in a separate process from my main script, but in such a way that they can access copies of variables I've declared. Consider the following example:
Serializable.js:
// Represents serializable data.
function Serializable() { /* ... */ }
SecondaryScript.js:
// Serializable is not defined here!
FakeMoney.prototype = new Serializable();
function FakeMoney(amount) { /* ... */ }
MainScript.wsf:
<job>
<script language="JScript" src="Serializable.js"></script>
<script language="JScript">
var WshShell = new ActiveXObject("WScript.Shell");
// `Serializable` is defined here...
var oExec = WshShell.Exec("cscript SecondaryScript.js");
WScript.Echo(oExec.Status);
</script>
</job>
Is there a way to define Serializable for the code in SecondaryScript.js while running SecondaryScript.js in a separate process?

You could create a separate .wsf that includes both Serializable.js and SecondaryScript.js, and run that from the first .wsf using Exec.
You can also try using events to communicate variables between scripts via the WScript object, or use the WshRemote object (locally) to get script statuses.
In fact, this seems to be the closest match to what you're describing: Running Scripts Remotely on MSDN. Again, you can do this locally to meet your goal.
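To make the first suggestion concrete, here is a minimal, untested sketch that reuses the file names from the question (the wrapper name RunSecondary.wsf is made up). Note that the child process gets its own fresh copy of Serializable; it does not share live state with MainScript.wsf.
RunSecondary.wsf:
<job>
    <script language="JScript" src="Serializable.js"></script>
    <script language="JScript" src="SecondaryScript.js"></script>
</job>
Then, in MainScript.wsf:
var oExec = WshShell.Exec("cscript //nologo RunSecondary.wsf");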

Related

How to set Attribute to PDO connection in Codeigniter

How to set attributes (PDO::ATTR_ERRMODE) on the PDO database handle in Codeigniter?
I think a better option is to use a MY_Model (which you then extend, so it is available across the application) and define something like this in the constructor:
$this->db->conn_id->setAttribute(PDO::ATTR_ERRMODE,PDO::ERRMODE_EXCEPTION);
Note that conn_id gives you access to the underlying PDO object.
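A minimal sketch of that MY_Model approach, assuming the database library is already loaded and the file sits at the usual application/core/MY_Model.php extension point (file and class names follow CodeIgniter's MY_ convention):
<?php
// application/core/MY_Model.php
class MY_Model extends CI_Model {

    public function __construct()
    {
        parent::__construct();
        // conn_id is the underlying PDO handle when the pdo driver is in use
        $this->db->conn_id->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    }
}
Your application models then extend MY_Model instead of CI_Model and inherit the setting.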
There are two ways:
1. The lazy (hacky) way
Add the following code to system/database/drivers/pdo/pdo_driver.php (in CI 3):
public function db_connect($persistent = FALSE)
{
    $this->options[PDO::ATTR_PERSISTENT] = $persistent;
    // Added code start
    $this->options[PDO::ATTR_ERRMODE] = PDO::ERRMODE_EXCEPTION;
    // Added code end
    try
    {
        return new PDO($this->dsn, $this->username, $this->password, $this->options);
    ...
}
2. The right way
Extend the database driver and add the same line there.
Note: If you set PDO::ERRMODE_EXCEPTION in CodeIgniter, exceptions will be shown even in the production environment.

Deploying SSRS RDL files from VB.Net - Issue with shared datasources

I am currently developing a utility to help automate our report deployment process. Multiple files, in multiple folders, to multiple servers.
I am using the ReportService2010.asmx web service, and I can deploy my files to the server, so I'm most of the way there.
My issue is that I have shared data sets and shared data sources, which are deployed to individual folders, separate from the report folders. When the deployment occurs, the web service looks locally for the data source rather than in the data source folder, giving an error like:
The dataset ‘CostReduction’ refers to the shared data source ‘CostReduction’, which is not
published on the report server. The shared data source ‘CostReduction’ must be published
before this report can run.
The data source/set has been deployed and the report functions correctly but I need to suppress these error messages as they may be hiding other actual errors.
I can hard-code a lookup that checks whether the data source/set exists and manually filter the errors that way, but it seems very inefficient. Is there any way I can tell the web service where to look for these files, or is there another approach that other people have used?
I'm not looking to change the reports so that the data source is read from
/DataSources/DataSourceName
as there are lots of reports and that's not how our existing projects are configured.
Many thanks in advance.
I realize you are using VB, but perhaps this will give you a clue if you convert it from C# to VB, using one of the translators on the web.
Hopefully this will give you a lead in the right direction.
When all the reports in a particular folder (referred to here as the 'parent folder') use the same shared data source, I use this to point them all at that data source (in this case "/DataSources/Shared_New"):
using GetPropertiesSample.ReportService2010;
using System.Diagnostics;
using System.Collections.Generic; //<== required for LISTS
using System.Reflection;
namespace GetPropertiesSample
{
    class Program
    {
        static void Main(string[] args)
        {
            GetListOfObjectsInGivenFolder_and_ResetTheReportDataSource("0_Contacts"); //<=== This is the parent folder
        }

        private static void GetListOfObjectsInGivenFolder_and_ResetTheReportDataSource(string sParentFolder)
        {
            // Create a Web service proxy object and set credentials
            ReportingService2010 rs = new ReportingService2010();
            rs.Credentials = System.Net.CredentialCache.DefaultCredentials;
            CatalogItem[] reportList = rs.ListChildren("/" + sParentFolder, true);
            int iCounter = 0;
            foreach (CatalogItem item in reportList)
            {
                iCounter += 1;
                Debug.Print(iCounter.ToString() + "]#########################################");
                if (item.TypeName == "Report")
                {
                    Debug.Print("Report: " + item.Name);
                    ResetTheDataSource_for_a_Report(item.Path, "/DataSources/Shared_New"); //<=== This is the DataSource that I want them to use
                }
            }
        }

        private static void ResetTheDataSource_for_a_Report(string sPathAndFileNameOfTheReport, string sPathAndFileNameForDataSource)
        {
            //from: http://stackoverflow.com/questions/13144604/ssrs-reportingservice2010-change-embedded-datasource-to-shared-datasource
            ReportingService2010 rs = new ReportingService2010();
            rs.Credentials = System.Net.CredentialCache.DefaultCredentials;
            string reportPathAndName = sPathAndFileNameOfTheReport;
            //example of sPathAndFileNameOfTheReport "/0_Contacts/207_Practices_County_CareManager_Role_ContactInfo";
            List<ReportService2010.ItemReference> itemRefs = new List<ReportService2010.ItemReference>();
            ReportService2010.DataSource[] itemDataSources = rs.GetItemDataSources(reportPathAndName);
            foreach (ReportService2010.DataSource itemDataSource in itemDataSources)
            {
                ReportService2010.ItemReference itemRef = new ReportService2010.ItemReference();
                itemRef.Name = itemDataSource.Name;
                //example of DataSource i.e. 'itemRef.Reference': "/DataSources/SharedDataSource_DB2_CRM";
                itemRef.Reference = sPathAndFileNameForDataSource;
                itemRefs.Add(itemRef);
            }
            rs.SetItemReferences(reportPathAndName, itemRefs.ToArray());
        }
    }
}
To call it, I use this in the Main method:
GetListOfObjectsInGivenFolder_and_ResetTheReportDataSource("0_Contacts");
In this case "0_Contacts" is the parent folder, itself located in the root directory, that contains all the reports for which I want to reset their DataSources to the new Shared DataSource. Then that Method calls the other method "ResetTheDataSource_for_a_Report" which actually sets the DataSource for the report.

Efficient way to run multiple scripts using javax.script

I am developing a game where I'd like to have multiple scripts that all implement the same structure. Each script would need to be run in its own scope so that code doesn't overlap other scripts. For example:
structure.js
function OnInit() {
// Define resources to load, collision vars, etc.
}
function OnLoop() {
// Every loop
}
function ClickEvent() {
// Someone clicked me
}
// Other fun functions
Now, let's say I have: "BadGuy.js", "ReallyReallyBadGuy.js", and "OtherBadGuy.js" - they all look like the above in terms of structure. Within the game, whenever an event takes place, I'd like to invoke the appropriate function.
The problem comes down to efficiency and speed. I found a working solution by creating an engine for each script instance (using getEngineByName), but that just doesn't seem ideal to me.
If there isn't a better solution, I'll probably resort to each script having its own unique class/function names, e.g.:
BadGuy.js
var BadGuy = new Object();
BadGuy.ClickEvent = function() {
}
I don't think you need to create a new ScriptEngine for every "Guy". You can manage them all in one engine. So, with advance apologies for butchering your game scenario...
Get one instance of the Rhino engine.
Issue eval(script) statements to add new JS Objects to the engine, along with the different behaviours (or functions) that you want these Objects to support.
You have a couple of different choices for invoking against each one, but as long as each "guy" has a unique name, you can always reference them by name and invoke a named method against it.
For more performance sensitive operations (perhaps some sort of round based event loop) you can precompile a script in the same engine which can then be executed without having to re-evaluate the source.
Here's a sample I wrote in Groovy.
import javax.script.*;
sem = new ScriptEngineManager();
engine = sem.getEngineByExtension("js");
engine.getBindings(ScriptContext.ENGINE_SCOPE).put("out", System.out);
eventLoop = "for(guy in allGuys) { out.println(allGuys[guy].Action(action)); }; "
engine.eval("var allGuys = []");
engine.eval("var BadGuy = new Object(); allGuys.push(BadGuy); BadGuy.ClickEvent = function() { return 'I am a BadGuy' }; BadGuy.Action = function(activity) { return 'I am doing ' + activity + ' in a BAD way' }");
engine.eval("var GoodGuy = new Object(); allGuys.push(GoodGuy); GoodGuy.ClickEvent = function() { return 'I am a GoodGuy' }; GoodGuy.Action = function(activity) { return 'I am doing ' + activity + ' in a GOOD way' }");
CompiledScript executeEvents = engine.compile(eventLoop);
println engine.invokeMethod(engine.get("BadGuy"), "ClickEvent");
println engine.invokeMethod(engine.get("GoodGuy"), "ClickEvent");
engine.getBindings(ScriptContext.ENGINE_SCOPE).put("action", "knitting");
executeEvents.eval();
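Here is the same single-engine approach translated to plain Java, since the question targets javax.script from Java. This is an untested sketch: the class name GuyHost is made up, and on newer JDKs getEngineByExtension("js") returns Nashorn rather than Rhino (or null if no JS engine ships with the JDK), but the javax.script calls are the same.
import javax.script.*;

public class GuyHost {
    public static void main(String[] args) throws Exception {
        // One engine for every "guy"; may be Rhino or Nashorn depending on the JDK.
        ScriptEngine engine = new ScriptEngineManager().getEngineByExtension("js");

        // Each guy is just a uniquely named JS object living in the shared scope.
        engine.eval("var allGuys = [];");
        engine.eval("var BadGuy = { ClickEvent: function() { return 'I am a BadGuy'; },"
                  + " Action: function(a) { return 'doing ' + a + ' in a BAD way'; } };"
                  + " allGuys.push(BadGuy);");

        // Invoke a named method on one specific guy.
        Invocable inv = (Invocable) engine;
        System.out.println(inv.invokeMethod(engine.get("BadGuy"), "ClickEvent"));

        // Precompile the hot event loop so the source is not re-parsed every tick.
        CompiledScript loop = ((Compilable) engine)
                .compile("for (var i in allGuys) { print(allGuys[i].Action(action)); }");
        engine.put("action", "knitting");
        loop.eval();
    }
}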

Scoping in embedded groovy scripts

In my app, I use Groovy as a scripting language. To make things easier for my customers, I have a global scope where I define helper classes and constants.
Currently, I need to run the script (which builds the global scope) every time a user script is executed:
context = setupGroovy();
runScript( context, "global.groovy" ); // Can I avoid doing this step every time?
runScript( context, "user.groovy" );
Is there a way to setup this global scope once and just tell the embedded script interpreter: "Look here if you can't find a variable"? That way, I could run the global script once.
Note: Security is not an issue here but if you know a way to make sure the user can't modify the global scope, that's an additional plus.
Shamelessly stolen from groovy.codehaus:
The most complete solution for people who want to embed groovy scripts into their servers and have them reloaded on modification is the GroovyScriptEngine. You initialize the GroovyScriptEngine with a set of CLASSPATH like roots that can be URLs or directory names. You can then execute any Groovy script within those roots. The GSE will also track dependencies between scripts so that if any dependent script is modified the whole tree will be recompiled and reloaded.
Additionally, each time you run a script you can pass in a Binding that contains properties that the script can access. Any properties set in the script will also be available in that binding after the script has run. Here is a simple example:
/my/groovy/script/path/hello.groovy:
output = "Hello, ${input}!"
import groovy.lang.Binding;
import groovy.util.GroovyScriptEngine;
String[] roots = new String[] { "/my/groovy/script/path" };
GroovyScriptEngine gse = new GroovyScriptEngine(roots);
Binding binding = new Binding();
binding.setVariable("input", "world");
gse.run("hello.groovy", binding);
System.out.println(binding.getVariable("output"));
This will print "Hello, world!".
Would something like that work for you?
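Applied to the global/user split from the question, a sketch of that idea (untested; the class name ScriptHost is made up) is to run global.groovy once and keep reusing its Binding. This assumes global.groovy exposes its helpers and constants as undeclared script variables so they end up in the Binding, and note that a user script can still overwrite them.
import groovy.lang.Binding;
import groovy.util.GroovyScriptEngine;

public class ScriptHost {
    public static void main(String[] args) throws Exception {
        GroovyScriptEngine gse = new GroovyScriptEngine(new String[] { "/my/groovy/script/path" });

        // Run the global script once; whatever it assigns to undeclared
        // (binding) variables stays in this Binding afterwards.
        Binding globalScope = new Binding();
        gse.run("global.groovy", globalScope);

        // Reuse the same Binding for every user script so user.groovy can
        // see the helpers and constants that global.groovy set up.
        gse.run("user.groovy", globalScope);
    }
}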
A simple solution is to use the code from groovy.lang.GroovyShell: You can precompile the script like so:
GroovyCodeSource gcs = AccessController.doPrivileged( new PrivilegedAction<GroovyCodeSource>() {
    public GroovyCodeSource run() {
        return new GroovyCodeSource( scriptCode, fileName, GroovyShell.DEFAULT_CODE_BASE );
    }
} );
GroovyClassLoader loader = AccessController.doPrivileged( new PrivilegedAction<GroovyClassLoader>() {
    public GroovyClassLoader run() {
        return new GroovyClassLoader( parentLoader, CompilerConfiguration.DEFAULT );
    }
} );
Class<?> scriptClass = loader.parseClass( gcs, false );
That was the expensive part. Now use InvokerHelper to bind the compiled code to a context (with global variables) and run it:
Binding context = new Binding(); // groovy.lang.Binding, not javax.script
Script script = InvokerHelper.createScript(scriptClass, context);
script.run();
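Since parsing is the expensive step, scriptClass can then be reused and only a cheap, fresh Binding created per run; for instance, assuming scriptCode held the hello.groovy one-liner from the previous answer:
// Reuse the compiled class; only the Binding changes per run.
Binding perRun = new Binding();
perRun.setVariable("input", "world");
InvokerHelper.createScript(scriptClass, perRun).run();
System.out.println(perRun.getVariable("output")); // "Hello, world!"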

dojo.requireIf does not allow local variables

I've been trying to use dojo.require(If) with a local variable to dynamically load a module on a page based on a condition.
// note: dojo v1.4
djConfig = {
    debugAtAllCosts: true
};
Example 1 (does not work):
(function() {
    var nameOfClass = "Two";
    dojo.require("my.namespace." + nameOfClass);
    dojo.addOnLoad(function() {
        var oneOrTwo = new my.namespace[nameOfClass]();
    });
}());
Error: ReferenceError: nameOfClass is not defined.
Example 2 (does not work):
(function() {
    var nameOfClass = "Two";
    dojo.requireIf(nameOfClass == "One", "my.namespace.One");
    dojo.requireIf(nameOfClass == "Two", "my.namespace.Two");
    dojo.addOnLoad(function() {
        var oneOrTwo = new my.namespace[nameOfClass]();
    });
}());
Error: ReferenceError: nameOfClass is not defined.
Example 3 (works):
(function() {
    window.nameOfClass = "Two";
    dojo.requireIf(window.nameOfClass == "One", "my.namespace.One");
    dojo.requireIf(window.nameOfClass == "Two", "my.namespace.Two");
    dojo.addOnLoad(function() {
        var oneOrTwo = new my.namespace[nameOfClass]();
    });
}());
For some reason, it appears as though require and requireIf only allow global variables inside them. Is that a current limitation, or am I just doing something wrong?
Update 1:
Therefore, if I understand you (#Maine, #jrburke) correctly, this is a limitation of debugAtAllCosts? If the above code is built as cross-domain (adding the xd file prefix/suffix) and executed, it will work as expected?
If that is the case, then what is the proper way of locally testing code that will be executed as cross-domain, without making the actual build?
That also makes me question the motivation for pre-parsing the dojo.require(s). If the loader_xd will not (or rather can not) pre-parse, why is the method that was created for testing/debugging doing so?
Update 2:
Since the two questions in the Update 1 above are not closely related to this one, I've moved them out into a separate discussion.
This is because requireIfs are parsed with regexps as the very first thing, and executed before the normal program flow.
If you grep the Dojo source for requireIf, you should find this kind of line handling it (loader_xd.js):
var depRegExp = /dojo.(require|requireIf|provide|requireAfterIf|platformRequire|requireLocalization)\s*\(([\w\W]*?)\)/mg;
The condition is then executed with eval in global scope, and not as a part of normal flow.
To expand on what Maine said, this is an issue with the XD loader in Dojo. debugAtAllCosts: true uses the XD loader. If you just use the normal Dojo loader without debugAtAllCosts, it is not an issue. Attaching the module name as a property on a publicly visible object (as in Example 3) also avoids the issue.
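In other words, with the standard loader the original Example 1 works unchanged; a minimal sketch of the djConfig from the question with the XD-triggering flag removed:
djConfig = {
    // debugAtAllCosts: true  <-- omit this (or set it to false) to stay on the
    // normal loader, so dojo.require("my.namespace." + nameOfClass) is
    // evaluated at runtime as usual
};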