Efficient way to run multiple scripts using javax.script

I am developing a game where I'd like to have multiple scripts that all implement the same structure. Each script would need to be run in its own scope so that code doesn't overlap other scripts. For example:
structure.js
function OnInit() {
    // Define resources to load, collision vars, etc.
}
function OnLoop() {
    // Every loop
}
function ClickEvent() {
    // Someone clicked me
}
// Other fun functions
Now, let's say I have "BadGuy.js", "ReallyReallyBadGuy.js", and "OtherBadGuy.js" - they all look like the above in terms of structure. Within the game, whenever an event takes place, I'd like to invoke the appropriate function.
The problem comes down to efficiency and speed. I found a working solution by creating an engine for each script instance (using getEngineByName), but that just doesn't seem ideal to me.
If there isn't a better solution, I'll probably resort to each script having its own unique class / function names. I.e.
BadGuy.js
var BadGuy = new Object();
BadGuy.ClickEvent = function() {
}

I don't think you need to create a new ScriptEngine for every "Guy". You can manage them all in one engine. So, with advance apologies for butchering your game scenario...
Get one instance of the Rhino engine.
Issue eval(script) statements to add new JS Objects to the engine, along with the different behaviours (or functions) that you want these Objects to support.
You have a couple of different choices for invoking against each one, but as long as each "guy" has a unique name, you can always look it up by name and invoke a named method on it.
For more performance-sensitive operations (perhaps some sort of round-based event loop) you can precompile a script in the same engine, which can then be executed without having to re-evaluate the source.
Here's a sample I wrote in Groovy.
import javax.script.*;
sem = new ScriptEngineManager();
engine = sem.getEngineByExtension("js");
engine.getBindings(ScriptContext.ENGINE_SCOPE).put("out", System.out);
eventLoop = "for(guy in allGuys) { out.println(allGuys[guy].Action(action)); }; "
engine.eval("var allGuys = []");
engine.eval("var BadGuy = new Object(); allGuys.push(BadGuy); BadGuy.ClickEvent = function() { return 'I am a BadGuy' }; BadGuy.Action = function(activity) { return 'I am doing ' + activity + ' in a BAD way' }");
engine.eval("var GoodGuy = new Object(); allGuys.push(GoodGuy); GoodGuy.ClickEvent = function() { return 'I am a GoodGuy' }; GoodGuy.Action = function(activity) { return 'I am doing ' + activity + ' in a GOOD way' }");
CompiledScript executeEvents = engine.compile(eventLoop);
println engine.invokeMethod(engine.get("BadGuy"), "ClickEvent");
println engine.invokeMethod(engine.get("GoodGuy"), "ClickEvent");
engine.getBindings(ScriptContext.ENGINE_SCOPE).put("action", "knitting");
executeEvents.eval();

Complex function using Parse Server Cloud Code (looping and creating records)

After a night of trial and error I have decided on a much simpler way to explain my issue. Again, I have no JS experience, so I don't really know what I am doing.
I have 5 classes:
game - holds information about my games
classification - holds information about the user classes available in games
game_classifications - creates a one game to many classifications relationship (makes a game have multiple classes)
mission - holds my mission information
mission_class - creates a one to many relationship between a mission and the classes available for that mission
Using Cloud Code, I want to provide two inputs through my REST API: missionObjectId and gameObjectId.
The actual steps I need the code to perform are:
Get the two inputs provided {"missionObjectId":"VALUE","gameObjectId":"VALUE"}
Search the game_classifications class for all records where game = gameObjectId
For each returned record, create a new record in mission_class with the following information:
mission_id = missionObjectId
classification = result.classification
And here is how I have tried to achieve this:
Parse.Cloud.define("activateMission", async (request) => {
    Parse.Cloud.useMasterKey();
    const query = new Parse.query('game_classifications');
    query.equalTo("gameObjectId", request.params.gameObjectId);
    for (let i = 0; i < query.length; i++) {
        const mission_classification = Parse.Object.extend("mission_class");
        const missionClass = new mission_classification();
        missionClass.set("mission_id", request.params.missionObjectId);
        missionClass.set("classification_id", query[i].classificationObjectId);
        return missionClass.save();
    }
});
Does anyone have any advice or input that might help me achieve this goal?
The current error I am getting is:
Parse.query is not a constructor
Thank you all in advance!
Some problems in your current code:
Parse.Cloud.useMasterKey() has not existed for quite a long time. Use the useMasterKey option instead.
It's Parse.Query and not Parse.query.
You need to run the query.findAll() command and iterate over its results (not over query itself).
For performance, move Parse.Object.extend calls to the beginning of the file.
To access the field of an object, use obj.get('fieldName') and not obj.fieldName.
If you return the save operation, it will save the first object, return, and not save the others.
So, the code needs to be something like this:
const mission_classification = Parse.Object.extend("mission_class");
const game = Parse.Object.extend("game");

Parse.Cloud.define("activateMission", async (request) => {
    const query = new Parse.Query('game_classifications');
    const gameObj = new game();
    gameObj.id = request.params.gameObjectId;
    query.equalTo("gameObjectId", gameObj);
    const queryResults = await query.findAll({ useMasterKey: true });
    for (let i = 0; i < queryResults.length; i++) {
        const missionClass = new mission_classification();
        missionClass.set("mission_id", request.params.missionObjectId);
        missionClass.set("classification_id", queryResults[i].get('classificationObjectId'));
        await missionClass.save(null, { useMasterKey: true });
    }
});

Redis StackExchange LuaScripts with parameters

I'm trying to use the following Lua script with the C# StackExchange.Redis library:
private const string LuaScriptToExecute = @"
    local current
    current = redis.call(""incr"", KEYS[1])
    if current == 1 then
        redis.call(""expire"", KEYS[1], KEYS[2])
        return 1
    else
        return current
    end";
Whenever I'm evaluating the script "as a string", it works properly:
var incrementValue = await Database.ScriptEvaluateAsync(LuaScriptToExecute,
new RedisKey[] { key, ttlInSeconds });
If I understand correctly, each time I invoke the ScriptEvaluateAsync method, the script is transmitted to the Redis server, which is not very efficient.
To overcome this, I tried using the "prepared script" approach, by running:
_setCounterWithExpiryScript = LuaScript.Prepare(LuaScriptToExecute);
...
...
var incrementValue = await Database.ScriptEvaluateAsync(_setCounterWithExpiryScript,
new[] { key, ttlInSeconds });
Whenever I try to use this approach, I receive the following error:
ERR Error running script (call to f_7c891a96328dfc3aca83aa6fb9340674b54c4442): #user_script:3: #user_script: 3: Lua redis() command arguments must be strings or integers
What am I doing wrong?
What is the right approach in using "prepared" LuaScripts that receive dynamic parameters?
If I look in the documentation: no idea.
If I look in the unit tests on GitHub it looks really easy.
(By the way, is your ttlInSeconds really a RedisKey and not a RedisValue? You are accessing it through KEYS[2] - shouldn't that be ARGV[1]? Anyway...)
It looks like you should rewrite your script to use named parameters and not arguments:
private const string LuaScriptToExecute = @"
    local current
    current = redis.call(""incr"", @myKey)
    if current == 1 then
        redis.call(""expire"", @myKey, @ttl)
        return 1
    else
        return current
    end";
// We should load scripts to the whole Redis cluster, even when we don't have one.
// In that case, there will be only one EndPoint, one iteration, etc...
// "lua" below is the prepared script, i.e. something like: var lua = LuaScript.Prepare(LuaScriptToExecute);
_myScripts = _redisMultiplexer.GetEndPoints()
.Select(endpoint => _redisMultiplexer.GetServer(endpoint))
.Where(server => server != null)
.Select(server => lua.Load(server))
.ToArray();
Then just execute it with anonymous class as parameter:
foreach (var setCounterWithExpiryScript in _myScripts)
{
    var incrementValue = await Database.ScriptEvaluateAsync(
        setCounterWithExpiryScript,
        new {
            myKey = (RedisKey)key, // or new RedisKey(key) or idk
            ttl = (RedisKey)ttlInSeconds
        }
    ); // .ConfigureAwait(false); // ? ;-)

    // when ttlInSeconds is a value and not a key, just don't cast it to RedisKey
    /*
    var incrementValue = await Database.ScriptEvaluateAsync(
        setCounterWithExpiryScript,
        new {
            myKey = (RedisKey)key,
            ttl = ttlInSeconds
        }
    ).ConfigureAwait(false);
    */
}
Warning:
Please note that Redis comes to a full stop while executing scripts. Your script looks super easy (you sometimes save one trip to Redis, when current != 1), so I have a feeling that this script will be counterproductive at anything beyond trivial scale. Just do one or two calls from C# and don't bother with this script.
First of all, Jan's comment above is correct.
The script line that updates the key's TTL should be redis.call("expire", KEYS[1], ARGV[1]).
Regarding the issue itself, after searching for similar issues in the StackExchange.Redis GitHub repository, I found that Lua scripts do not work really well in cluster mode.
Fortunately, it seems that "loading the scripts" isn't really necessary.
The ScriptEvaluateAsync method works properly in cluster mode and is sufficient (caching-wise).
More details can be found in the following GitHub issue.
So at the end, using ScriptEvaluateAsync without "preparing the script" did the job.
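For reference, here is a rough sketch of what the final version ends up looking like, reusing the key and ttlInSeconds variables from the question and passing the TTL through ARGV per the correction above:
private const string LuaScriptToExecute = @"
    local current
    current = redis.call(""incr"", KEYS[1])
    if current == 1 then
        redis.call(""expire"", KEYS[1], ARGV[1])
        return 1
    else
        return current
    end";

// The key goes in the keys array (KEYS[1]); the TTL goes in the values array (ARGV[1]).
var result = await Database.ScriptEvaluateAsync(
    LuaScriptToExecute,
    new RedisKey[] { key },
    new RedisValue[] { ttlInSeconds });
var incrementValue = (long)result;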
As a side note regarding Jan's comment above that this script isn't needed and can be replaced with two C# calls: the script is actually quite important, since the operation has to be atomic (it implements a "rate limiter" pattern).

How can I get a list of variables and assignments / default values?

We have a collection of Microsoft Windows Workflows (.xaml files) that I need to go through and inventory the variables. The workflows are complicated with variables scoped at many levels so I can't simply open up the Workflow xaml and look at the Variables tab at the top level; I need to dig through each level, sequence, etc. to find all possible variable definitions.
Can I automate this process? Can Visual Studio aid in this process?
One solution: I could write some code to read the workflow file, look for variables, grab any default values, and check whether the variable is assigned, thus overriding the default. Technically, this is possible from C#. But is this solution really necessary to get the information?
You can use a recursive function like this:
// Requires: System.Activities, System.Collections.Generic, System.Collections.ObjectModel,
// System.Linq and System.Reflection.
List<Variable> Variables;

private void GetVariables(DynamicActivity act)
{
    Variables = new List<Variable>();
    InspectActivity(act);
}

private void InspectActivity(Activity root)
{
    IEnumerator<Activity> activities = WorkflowInspectionServices.GetActivities(root).GetEnumerator();
    while (activities.MoveNext())
    {
        PropertyInfo propVars = activities.Current.GetType().GetProperties()
            .FirstOrDefault(p => p.Name == "Variables" && p.PropertyType == typeof(Collection<Variable>));
        if (propVars != null)
        {
            try
            {
                Collection<Variable> variables = (Collection<Variable>)propVars.GetValue(activities.Current, null);
                variables.ToList().ForEach(v =>
                {
                    Variables.Add(v);
                });
            }
            catch
            {
                // Ignore activities whose Variables property cannot be read.
            }
        }
        // Recurse into children to pick up variables declared at deeper scopes.
        InspectActivity(activities.Current);
    }
}
And should be called like this:
// xamlData holds the contents of the workflow .xaml file.
// Requires: System.IO, System.Text, System.Xaml, System.Reflection and System.Activities.XamlIntegration.
using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(xamlData)))
{
    XamlXmlReaderSettings readerSettings = new XamlXmlReaderSettings()
    {
        LocalAssembly = Assembly.GetExecutingAssembly()
    };
    var xamlReader = new XamlXmlReader(stream, readerSettings);
    Activity activity = ActivityXamlServices.Load(xamlReader);
    DynamicActivity root = activity as DynamicActivity;
    GetVariables(root);
}
Credit to: https://stackoverflow.com/a/11429284/593609

Delaying writes to SQL Server

I am working on an app and need to keep track of how many views a page has - almost like how SO does it. It is a value used to determine how popular a given page is.
I am concerned that writing to the DB every time a new view needs to be recorded will impact performance. I know this is borderline premature optimization, but I have experienced the problem before. Anyway, the value doesn't need to be real time; it is OK if it is delayed by 10 minutes or so. I was thinking that caching the data and doing one large write every X minutes should help.
I am running on Windows Azure, so the AppFabric cache is available to me. My original plan was to create some sort of compound key (PostID:UserID), and tag the key with "pageview". AppFabric allows you to get all keys by tag. Thus I could let them build up, and do one bulk insert into my table instead of many small writes. The table looks like this, but is open to change.
int PageID | guid userID | DateTime ViewTimeStamp
The website would still get the value from the database; writes would just be delayed. Does that make sense?
I just read that the Windows Azure AppFabric cache does not support tag-based searches, so that pretty much negates my idea.
My question is, how would you accomplish this? I am new to Azure, so I am not sure what my options are. Is there a way to use the cache without tag-based searches? I am just looking for advice on how to delay these writes to SQL.
You might want to take a look at http://www.apathybutton.com (and the Cloud Cover episode it links to), which talks about a highly scalable way to count things. (It might be overkill for your needs, but hopefully it gives you some options.)
You could keep a queue in memory and, on a timer, drain the queue, collapse the queued items by totaling the counts per page, and write them in one SQL batch/round trip. For example, using a table-valued parameter (TVP) you could write the queued totals with one sproc call (a sketch follows below).
That of course doesn't guarantee the view counts get written, since they sit in memory and are written with a delay, but page counts shouldn't be critical data and crashes should be rare.
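If you go the TVP route, here is a rough, untested sketch of what the drain-and-write could look like. The dbo.PageViewCounts table type and dbo.AddPageViews procedure are made-up names, and queuedTotals stands in for the collapsed page-id/count totals:
using System.Data;
using System.Data.SqlClient;

// queuedTotals: Dictionary<int, int> of page id -> hits accumulated since the last drain.
var table = new DataTable();
table.Columns.Add("PageID", typeof(int));
table.Columns.Add("ViewCount", typeof(int));
foreach (var pair in queuedTotals)
{
    table.Rows.Add(pair.Key, pair.Value);
}

using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.AddPageViews", con) { CommandType = CommandType.StoredProcedure })
{
    // The parameter's TypeName must match a user-defined table type with the same columns.
    var tvp = cmd.Parameters.AddWithValue("@views", table);
    tvp.SqlDbType = SqlDbType.Structured;
    tvp.TypeName = "dbo.PageViewCounts";
    con.Open();
    cmd.ExecuteNonQuery();
}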
You might want to have a look at how the "diagnostics" feature in Azure works - not because you would use diagnostics for what you are doing, but because it deals with a similar problem and may provide some inspiration. I am just about to implement a data auditing feature that logs to table storage, so I also want to delay and bunch the updates together, and I have taken a lot of inspiration from diagnostics.
Now, the way Diagnostics in Azure works is that each role starts a little background "transfer" thread. So, whenever you write any traces, they get stored in a list in local memory, and the background thread will (by default) bunch all the requests up and transfer them to table storage every minute.
In your scenario, I would let each role instance keep track of a count of hits and then use a background thread to update the database every minute or so.
I would probably use something like a static ConcurrentDictionary (or one hanging off a singleton) on each web role, with each hit incrementing the counter for the page identifier. You'd need some thread-handling code to allow multiple requests to update the same counter in the list. Alternatively, just allow each "hit" to add a new record to a shared thread-safe list.
Then, have a background thread once per minute increment the database with the number of hits per page since last time, and reset the local counters to 0 (or empty the shared list if you are going with that approach); again, be careful about the multithreading and locking.
The important thing is to make sure your database update is atomic: if you read the current count from the database, increment it, and then write it back, you may have two different web role instances doing this at the same time and thus lose one update.
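For illustration, here is a minimal sketch of the ConcurrentDictionary variant mentioned above (the full sample below uses the shared-queue alternative instead); the type and member names are just illustrative:
using System.Collections.Concurrent;
using System.Collections.Generic;

public static class PageHitCounter
{
    private static readonly ConcurrentDictionary<int, int> Counts = new ConcurrentDictionary<int, int>();

    // Called from each request thread; AddOrUpdate handles concurrent increments safely.
    public static void LogHit(int pageId)
    {
        Counts.AddOrUpdate(pageId, 1, (id, current) => current + 1);
    }

    // Called from the background thread: remove and return the counts gathered so far.
    public static IDictionary<int, int> Drain()
    {
        var snapshot = new Dictionary<int, int>();
        foreach (var pageId in Counts.Keys)
        {
            int hits;
            if (Counts.TryRemove(pageId, out hits) && hits > 0)
            {
                snapshot[pageId] = hits;
            }
        }
        return snapshot;
    }
}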
EDIT:
Here is a quick sample of how you could go about this.
using System.Collections.Concurrent;
using System.Data.SqlClient;
using System.Threading;
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main(string[] args)
    {
        // You would put this in your Application_start for the web role
        Thread hitTransfer = new Thread(() => HitCounter.Run(new TimeSpan(0, 0, 1))); // You'd probably want the transfer to happen once a minute rather than once a second
        hitTransfer.Start();

        // Testing code - this just simulates various web threads being hit and adding hits to the counter
        RunTestWorkerThreads(5);
        Thread.Sleep(5000);

        // You would put the following line in your Application shutdown
        HitCounter.StopRunning(); // You could do some cleverer stuff with aborting threads, joining the thread etc but you probably won't need to

        Console.WriteLine("Finished...");
        Console.ReadKey();
    }

    private static void RunTestWorkerThreads(int workerCount)
    {
        Thread[] workerThreads = new Thread[workerCount];
        for (int i = 0; i < workerCount; i++)
        {
            workerThreads[i] = new Thread(
                (tagname) =>
                {
                    Random rnd = new Random();
                    for (int j = 0; j < 300; j++)
                    {
                        HitCounter.LogHit(tagname.ToString());
                        Thread.Sleep(rnd.Next(0, 5));
                    }
                });
            workerThreads[i].Start("TAG" + i);
        }

        foreach (var t in workerThreads)
        {
            t.Join();
        }
        Console.WriteLine("All threads finished...");
    }
}

public static class HitCounter
{
    private static System.Collections.Concurrent.ConcurrentQueue<string> hits;
    private static object transferlock = new object();
    private static volatile bool stopRunning = false;

    static HitCounter()
    {
        hits = new ConcurrentQueue<string>();
    }

    public static void LogHit(string tag)
    {
        hits.Enqueue(tag);
    }

    public static void Run(TimeSpan transferInterval)
    {
        while (!stopRunning)
        {
            Transfer();
            Thread.Sleep(transferInterval);
        }
    }

    public static void StopRunning()
    {
        stopRunning = true;
        Transfer();
    }

    private static void Transfer()
    {
        lock (transferlock)
        {
            var tags = GetPendingTags();
            var hitCounts = from tag in tags
                            group tag by tag into g
                            select new KeyValuePair<string, int>(g.Key, g.Count());
            WriteHits(hitCounts);
        }
    }

    private static void WriteHits(IEnumerable<KeyValuePair<string, int>> hitCounts)
    {
        // NOTE: I don't usually use sql commands directly and have not tested the below
        // The idea is that the update should be atomic so even though you have multiple
        // web servers all issuing similar update commands, potentially at the same time,
        // they should all commit. I do urge you to test this part as I cannot promise this code
        // will work as-is
        //using (SqlConnection con = new SqlConnection("xyz"))
        //{
        //    foreach (var hitCount in hitCounts.OrderBy(h => h.Key))
        //    {
        //        var cmd = con.CreateCommand();
        //        cmd.CommandText = "update hits set count = count + @count where tag = @tag";
        //        cmd.Parameters.AddWithValue("@count", hitCount.Value);
        //        cmd.Parameters.AddWithValue("@tag", hitCount.Key);
        //        cmd.ExecuteNonQuery();
        //    }
        //}

        Console.WriteLine("Writing....");
        foreach (var hitCount in hitCounts.OrderBy(h => h.Key))
        {
            Console.WriteLine(String.Format("{0}\t{1}", hitCount.Key, hitCount.Value));
        }
    }

    private static IEnumerable<string> GetPendingTags()
    {
        List<string> hitlist = new List<string>();
        var currentCount = hits.Count();
        for (int i = 0; i < currentCount; i++)
        {
            string tag = null;
            if (hits.TryDequeue(out tag))
            {
                hitlist.Add(tag);
            }
        }
        return hitlist;
    }
}

Encapsulating common logic (domain driven design, best practices)

Updated: 09/02/2009 - Revised question, provided better examples, added bounty.
Hi,
I'm building a PHP application using the data mapper pattern between the database and the entities (domain objects). My question is:
What is the best way to encapsulate a commonly performed task?
For example, one common task is retrieving one or more site entities from the site mapper, and their associated (home) page entities from the page mapper. At present, I would do that like this:
$siteMapper = new Site_Mapper();
$site = $siteMapper->findById(1);
$pageMapper = new Page_Mapper();
$site->addPage($pageMapper->findHome($site->getId()));
Now that's a fairly trivial example, but it gets more complicated in reality, as each site also has an associated locale, and the page actually has multiple revisions (although for the purposes of this task I'd only be interested in the most recent one).
I'm going to need to do this (get the site and associated home page, locale, etc.) in multiple places within my application, and I can't think of the best way/place to encapsulate this task, so that I don't have to repeat it all over the place. Ideally I'd like to end up with something like this:
$someObject = new SomeClass();
$site = $someObject->someMethod(1); // or
$sites = $someObject->someOtherMethod();
Where the resulting site entities already have their associated entities created and ready for use.
The same problem occurs when saving these objects back. Say I have a site entity and an associated home page entity, and they've both been modified; I have to do something like this:
$siteMapper->save($site);
$pageMapper->save($site->getHomePage());
Again, trivial, but this example is simplified. Duplication of code still applies.
In my mind it makes sense to have some sort of central object that could take care of:
Retrieving a site (or sites) and all necessary associated entities
Creating new site entities with new associated entities
Taking a site (or sites) and saving it and all associated entities (if they've changed)
So back to my question, what should this object be?
The existing mapper object?
Something based on the repository pattern?*
Something based on the unit of work pattern?*
Something else?
* I don't fully understand either of these, as you can probably guess.
Is there a standard way to approach this problem, and could someone provide a short description of how they'd implement it? I'm not looking for anyone to provide a fully working implementation, just the theory.
Thanks,
Jack
Using the repository/service pattern, your Repository classes would provide a simple CRUD interface for each of your entities, then the Service classes would be an additional layer that performs additional logic like attaching entity dependencies. The rest of your app then only utilizes the Services. Your example might look like this:
$site = $siteService->getSiteById(1); // or
$sites = $siteService->getAllSites();
Then inside the SiteService class you would have something like this:
function getSiteById($id) {
    $site = $siteRepository->getSiteById($id);
    foreach ($pageRepository->getPagesBySiteId($site->id) as $page)
    {
        $site->pages[] = $page;
    }
    return $site;
}
I don't know PHP that well so please excuse if there is something wrong syntactically.
[Edit: this entry attempts to address the fact that it is oftentimes easier to write custom code to directly deal with a situation than it is to try to fit the problem into a pattern.]
Patterns are nice in concept, but they don't always "map". After years of high end PHP development, we have settled on a very direct way of handling such matters. Consider this:
File: Site.php
class Site
{
    public static function Select($ID)
    {
        // Ensure current user has access to ID
        // Lookup and return data
    }

    public static function Insert($aData)
    {
        // Validate $aData
        // In the event of errors, raise a ValidationError($ErrorList)
        // Do whatever it is you are doing
        // Return new ID
    }

    public static function Update($ID, $aData)
    {
        // Validate $aData
        // In the event of errors, raise a ValidationError($ErrorList)
        // Update necessary fields
    }
}
Then, in order to call it (from anywhere), just run:
$aData = Site::Select(123);
Site::Update(123, array('FirstName' => 'New First Name'));
$ID = Site::Insert(array(...))
One thing to keep in mind about OO programming and PHP... PHP does not keep "state" between requests, so creating an object instance just to have it immediately destroyed does not often make sense.
I'd probably start by extracting the common task to a helper method somewhere, then waiting to see what the design calls for. It feels like it's too early to tell.
What would you name this method? The name usually hints at where the method belongs.
class Page {
    public $id, $title, $url;

    public function __construct($id = false) {
        $this->id = $id;
    }

    public function save() {
        // ...
    }
}

class Site {
    public $id = '';
    public $pages = array();

    function __construct($id) {
        $this->id = $id;
        foreach ($this->getPages() as $page_id) {
            $this->pages[] = new Page($page_id);
        }
    }

    private function getPages() {
        // ...
    }

    public function addPage($url) {
        $page = ($this->pages[] = new Page());
        $page->url = $url;
        return $page;
    }

    public function save() {
        foreach ($this->pages as $page) {
            $page->save();
        }
        // ..
    }
}
$site = new Site($id);
$page = $site->addPage('/');
$page->title = 'Home';
$site->save();
Make your Site object an Aggregate Root to encapsulate the complex association and ensure consistency.
Then create a SiteRepository that has the responsibility of retrieving the Site aggregate and populating its children (including all Pages).
You will not need a separate PageRepository (assuming that you don't make Page a separate Aggregate Root), and your SiteRepository should have the responsibility of retrieving the Page objects as well (in your case by using your existing Mappers).
So:
$siteRepository = new SiteRepository($myDbConfig);
$site = $siteRepository->findById(1); // will have Page children attached
And then the findById method would be responsible for also finding all Page children of the Site. This will have a similar structure to the answer CodeMonkey1 gave; however, I believe you will benefit more from using the Aggregate and Repository patterns rather than creating a specific Service for this task. Any other retrieval/querying/updating of the Site aggregate, including any of its child objects, would be done through the same SiteRepository.
Edit: Here's a short DDD Guide to help you with the terminology, although I'd really recommend reading Evans if you want the whole picture.