Add help documentation for new command loaded from external redis module within redis-cli - redis

Help for commands within redis-cli is stored in redis/src/help.h.
I would like to provide help for commands loaded via a Redis module (using loadmodule). The closest relevant information I could find is "Redis Modules: an introduction to the API".
Do you have any suggestions?

I checked redis/src/redis-cli.c: the help entries are created at compile time, so currently there is no way to do this.
static void cliInitHelp(void) {
    int commandslen = sizeof(commandHelp)/sizeof(struct commandHelp);
    int groupslen = sizeof(commandGroups)/sizeof(char*);
    int i, len, pos = 0;
    helpEntry tmp;

    helpEntriesLen = len = commandslen+groupslen;
    helpEntries = zmalloc(sizeof(helpEntry)*len);

    for (i = 0; i < groupslen; i++) {
        tmp.argc = 1;
        tmp.argv = zmalloc(sizeof(sds));
        tmp.argv[0] = sdscatprintf(sdsempty(),"#%s",commandGroups[i]);
        tmp.full = tmp.argv[0];
        tmp.type = CLI_HELP_GROUP;
        tmp.org = NULL;
        helpEntries[pos++] = tmp;
    }

    for (i = 0; i < commandslen; i++) {
        tmp.argv = sdssplitargs(commandHelp[i].name,&tmp.argc);
        tmp.full = sdsnew(commandHelp[i].name);
        tmp.type = CLI_HELP_COMMAND;
        tmp.org = &commandHelp[i];
        helpEntries[pos++] = tmp;
    }
}
Redis module developers should not have to write their module command documentation in redis/src/help.h. I would suggest the following (a rough sketch of the module side follows this list):
Using a new Module API function, the module developer registers new command documentation (consisting of command syntax, summary, since, and group) into a system hash.
redis-cli reads the additional command documentation from that system hash to populate helpEntries[].
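As a rough illustration of the module side of this proposal; RedisModule_RegisterCommandHelp does not exist in the Modules API today, and it and the field values are purely hypothetical:
#include "redismodule.h"

/* Hypothetical API: RedisModule_RegisterCommandHelp() does not exist today.
 * The idea is that it stores syntax/summary/since/group in a well-known
 * system hash that redis-cli can read when it builds helpEntries[]. */
int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);

    if (RedisModule_Init(ctx, "mymodule", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    /* ... RedisModule_CreateCommand(ctx, "mymodule.hello", ...) ... */

    RedisModule_RegisterCommandHelp(ctx,
        "MYMODULE.HELLO key [count]",   /* command syntax */
        "Greets the given key",         /* summary        */
        "1.0.0",                        /* since          */
        "mymodule");                    /* group          */

    return REDISMODULE_OK;
}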


Apache-ignite: Persistent Storage

My understanding of Ignite Persistent Storage is that the data is not only kept in memory, but also written to disk.
When the node is restarted, it should read the data from disk back into memory.
So I am using this example to test it out, but I updated it a little because I don't want to use XML.
This is my slightly updated code.
public class PersistentIgniteExpr {
    /**
     * Organizations cache name.
     */
    private static final String ORG_CACHE = "CacheQueryExample_Organizations";

    /** */
    private static final boolean UPDATE = true;

    public void test(String nodeId) {
        // Apache Ignite node configuration.
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Ignite persistence configuration.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Enabling the persistence.
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // Applying settings.
        cfg.setDataStorageConfiguration(storageCfg);

        List<String> addresses = new ArrayList<>();
        addresses.add("127.0.0.1:47500..47502");

        TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
        tcpDiscoverySpi.setIpFinder(new TcpDiscoveryMulticastIpFinder().setAddresses(addresses));
        cfg.setDiscoverySpi(tcpDiscoverySpi);

        try (Ignite ignite = Ignition.getOrStart(cfg.setIgniteInstanceName(nodeId))) {
            // Activate the cluster. Required to do if the persistent store is enabled because you might need
            // to wait while all the nodes, that store a subset of data on disk, join the cluster.
            ignite.active(true);

            CacheConfiguration<Long, Organization> cacheCfg = new CacheConfiguration<>(ORG_CACHE);
            cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            cacheCfg.setBackups(1);
            cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
            cacheCfg.setIndexedTypes(Long.class, Organization.class);

            IgniteCache<Long, Organization> cache = ignite.getOrCreateCache(cacheCfg);

            if (UPDATE) {
                System.out.println("Populating the cache...");

                try (IgniteDataStreamer<Long, Organization> streamer = ignite.dataStreamer(ORG_CACHE)) {
                    streamer.allowOverwrite(true);

                    for (long i = 0; i < 100_000; i++) {
                        streamer.addData(i, new Organization(i, "organization-" + i));

                        if (i > 0 && i % 10_000 == 0)
                            System.out.println("Done: " + i);
                    }
                }
            }

            // Run SQL without explicitly calling to loadCache().
            QueryCursor<List<?>> cur = cache.query(
                new SqlFieldsQuery("select id, name from Organization where name like ?")
                    .setArgs("organization-54321"));
            System.out.println("SQL Result: " + cur.getAll());

            // Run get() without explicitly calling to loadCache().
            Organization org = cache.get(54321L);
            System.out.println("GET Result: " + org);
        }
    }
}
When I run it the first time, it works as intended.
After running it once, I assume the data has been written to disk, since the code enables persistent storage.
For the second run, I commented out this part:
if (UPDATE) {
    System.out.println("Populating the cache...");

    try (IgniteDataStreamer<Long, Organization> streamer = ignite.dataStreamer(ORG_CACHE)) {
        streamer.allowOverwrite(true);

        for (long i = 0; i < 100_000; i++) {
            streamer.addData(i, new Organization(i, "organization-" + i));

            if (i > 0 && i % 10_000 == 0)
                System.out.println("Done: " + i);
        }
    }
}
That is the part where the data is written. When the SQL query is executed, it returns null. Does that mean the data was not written to disk?
Another question: I am not very clear about TcpDiscoverySpi. Can someone explain that as well?
Thanks in advance.
Do you have any exceptions at node startup?
Most probably you don't have the IGNITE_HOME environment variable configured, so the work directory used for persistence is chosen differently each time you run a node.
You can either set the IGNITE_HOME environment variable or add a line of code to set the work directory explicitly: cfg.setWorkDirectory("C:\\workDirectory");
TcpDiscoverySpi provides a way to discover remote nodes in a grid, so the starting node can join the cluster. It is better to use TcpDiscoveryVmIpFinder if you know the list of IPs. TcpDiscoveryMulticastIpFinder broadcasts UDP messages to the network to discover other nodes; it does not require a list of IPs at all.
Please see https://apacheignite.readme.io/docs/cluster-config for more details.
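For example, a rough sketch combining both suggestions (the class name, work directory path, and address list are placeholders to adapt):
import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class FixedWorkDirConfig {
    // Sketch only: pin the persistence location and use a static IP finder.
    public static IgniteConfiguration create(String nodeId) {
        IgniteConfiguration cfg = new IgniteConfiguration().setIgniteInstanceName(nodeId);

        // Fix the work directory explicitly (alternative to configuring IGNITE_HOME).
        cfg.setWorkDirectory("C:\\workDirectory");

        // Static list of addresses instead of multicast discovery.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47502"));

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);
        cfg.setDiscoverySpi(spi);

        return cfg;
    }
}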

Read 'hidden' input for CLI Dart app

What's the best way to receive 'hidden' input from a command-line Dart application? For example, in Bash, this is accomplished with:
read -s SOME_VAR
Set io.stdin.echoMode to false:
import 'dart:io' as io;

void main() {
  io.stdin.echoMode = false;

  // Alternative 1: read a whole line without echoing it.
  String input = io.stdin.readLineSync();

  // Alternative 2: read byte by byte instead (this variant stops at a space, 32,
  // and skips the newline, 10). Only use one of the two alternatives.
  // var input;
  // while (input != 32) {
  //   input = io.stdin.readByteSync();
  //   if (input != 10) print(input);
  // }

  // restore echoMode
  io.stdin.echoMode = true;
}
This is a slightly extended version. The key difference is that it uses a finally block to ensure the terminal mode is reset if an exception is thrown while the code is executing.
The code also uses a waitFor call (only available in Dart CLI apps) to turn this into a synchronous call. Given this is a CLI command, there is no need for the complications that futures bring to the table.
The code also does the classic output of '*' as you type.
If you are doing much CLI work, the code below is from the Dart package I'm working on, called dcli. Have a look at the 'ask' method.
https://pub.dev/packages/dcli
import 'dart:cli';      // for waitFor (Dart CLI apps only)
import 'dart:convert';  // for Encoding
import 'dart:io';

String readHidden() {
  var line = <int>[];
  try {
    stdin.echoMode = false;
    stdin.lineMode = false;
    int char;
    do {
      char = stdin.readByteSync();
      if (char != 10) {
        stdout.write('*');
        // we must wait for flush as only one flush can be outstanding at a time.
        waitFor<void>(stdout.flush());
        line.add(char);
      }
    } while (char != 10);
  } finally {
    stdin.echoMode = true;
    stdin.lineMode = true;
  }
  // output a newline as we have suppressed it.
  print('');
  return Encoding.getByName('utf-8').decode(line);
}
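For completeness, a minimal usage sketch (readHidden here is just the function above, and the prompt text is arbitrary):
void main() {
  stdout.write('Password: ');
  var password = readHidden();
  print('Read ${password.length} characters.');
}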

How to use ngx_write_chain_to_temp_file correctly?

I am writing an nginx module which constructs an nginx chain and then writes this chain buffer to an nginx temporary file to use it later (just after the write happens). I've been searching everywhere and the only solution I could come up with is the one below:
// Create temp file to test
ngx_temp_file_t  *tf;

tf = ngx_pcalloc(r->pool, sizeof(ngx_temp_file_t));
if (tf == NULL) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

tf->file.fd = NGX_INVALID_FILE;
tf->file.log = nlog;
tf->path = clcf->client_body_temp_path;
tf->pool = r->pool;
tf->log_level = r->request_body_file_log_level;
tf->persistent = r->request_body_in_persistent_file;
tf->clean = r->request_body_in_clean_file;

// if (r->request_body_file_group_access) {
//     tf->access = 0660;
// }

if (ngx_create_temp_file(&tf->file, tf->path, tf->pool, tf->persistent, tf->clean, tf->access) != NGX_OK) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

if (ngx_write_chain_to_temp_file(tf, bucket->first) == NGX_ERROR) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
This code does not return NGX_ERROR; does that mean nginx successfully wrote the temporary file into client_body_temp_path? If the answer is yes, why is it that when I then use fopen to open the file, it does not exist?
Can anyone please give me the right way to handle ngx_write_chain_to_temp_file?
I found the solution myself:
ngx_temp_file_t  *tf;

tf = ngx_pcalloc(r->pool, sizeof(ngx_temp_file_t));
tf->file.fd = NGX_INVALID_FILE;
tf->file.log = nlog;
tf->path = clcf->client_body_temp_path;
tf->pool = r->pool;
tf->persistent = 1;

rc = ngx_create_temp_file(&tf->file, tf->path, tf->pool, tf->persistent, tf->clean, tf->access);

//ngx_write_chain_to_file(&tf->file, bucket->first, bucket->content_length, r->pool);
ngx_write_chain_to_temp_file(tf, bucket->first);
The only thing I cannot understand is that if I set tf->persistent to false (0), I cannot read from the file after it is created, even though I have not passed the response to the output filter yet.
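For the "use it later" part, here is a minimal sketch (not from the original post) that keeps the byte count returned by ngx_write_chain_to_temp_file() and reads the data back through the same ngx_file_t instead of fopen():
ssize_t   size, n;
u_char   *buf;

/* ngx_write_chain_to_temp_file() returns the number of bytes written,
 * or NGX_ERROR on failure. */
size = ngx_write_chain_to_temp_file(tf, bucket->first);
if (size == NGX_ERROR) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

buf = ngx_palloc(r->pool, (size_t) size);
if (buf == NULL) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

/* read the chain back from offset 0 of the temp file */
n = ngx_read_file(&tf->file, buf, (size_t) size, 0);
if (n == NGX_ERROR) {
    return NGX_HTTP_INTERNAL_SERVER_ERROR;
}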

Efficient Way to do batch import XMI in Enterprise Architect

Our team is using Enterprise Architect version 10 and SVN for the repository.
Because the EAP file is quite big (e.g. 80 MB), we export each package into a separate XMI file and store it in SVN. The EAP file itself is committed only after certain milestones. The problem is that to synchronize the EAP file with a co-worker's work during development, we need to import lots of XMI files (the total can be around 500 files).
I know that once the EAP file is updated, we can use Package Control -> Get All Latest. Therefore this problem occurs only during parallel development.
We have used keyboard shortcuts to do the import as follows:
Ctrl+Alt+I (Import package from XMI file)
Select the file name to import
Alt+I (Import)
Enter (Yes)
Repeat steps 2 to 4 until the module is finished
But still, importing hundreds of files is inefficient.
I've checked that Package Control has Batch Import/Export. Batch import/export works when I explicitly hard-code the XMI filename, but the options are not available when using version control (the batch import/export options are greyed out).
Is there any better ways to synchronize EAP and XMI files?
There is a scripting interface in EA. You might be able to automate the import using that; a minimal sketch is shown below. I've not used it, but it's probably quite good.
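For what it's worth, an untested sketch of such a script (the GUID and XMI path are placeholders; the complete export/import scripts further down in this thread do this end to end):
!INC Local Scripts.EAConstants-JScript

// Untested sketch: import a single XMI file under a known parent package.
// The GUID and file path below are placeholders.
function importOne()
{
    var project as EA.Project;
    project = Repository.GetProjectInterface();

    var parentGuid = "{00000000-0000-0000-0000-000000000000}";  // placeholder GUID
    var xmiFile = "D:\\svn\\design\\MyPackage.xml";             // placeholder path

    var result = project.ImportPackageXMI(project.GUIDtoXML(parentGuid), xmiFile, 1, 0);
    Session.Output(result);
}

importOne();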
I'm not sure I fully understand your working environment, but I have some general points that may be of interest. If you use EA in a different way (especially my first point below), the need to batch import might go away.
Multiworker
First, multiple people can work on the same EAP file at a time. The EAP file is nothing more than an Access database file, and EA uses locking to stop multiple people editing the same package at the same time. But you can comfortably have multiple people editing different packages in one EAP file at the same time. Putting the EAP file on a file share somewhere is a good way of doing it.
Inbuilt Revision Control
Secondly, EA can interact directly with SVN (and other revision control systems). See this. In short, you can set up your EAP file so that individual packages (and everything below them) are SVN controlled. You can then check out an individual package, edit it, and check it back in. Or indeed you can check out the whole branch below a package (including sub-packages that are themselves SVN controlled).
Under the hood, EA is importing and exporting XMI files and checking them in and out of SVN, while the EAP file is always the head revision. Just like what you're doing by hand, but automated. It makes sense given that you can all use the one single EAP file. You do have to be a bit careful rolling back - links originating from objects in older versions of one package might be pointing at objects that no longer exist (but you can look at the import log errors to see if this is the case). It takes a bit of getting used to, but it works pretty well.
There's also the built-in package baselining functionality - that might be all you need anyway, and it works quite well, especially if you're all using the same EAP file.
Bigger Database Engine
Thirdly, you don't have to have an EAP file at all. The model's database can be in any suitable database system (MySQL, SQL Server, Oracle, etc.). That gives you all sorts of options for scaling up how it's used, what it's like over a WAN/Internet, and so on.
In short, Sparx has been quite sensible about how EA can be used in a multi-worker environment, and it's worth exploiting that.
I have created EA scripts using JScript for the automation.
Here is the script to do the export:
!INC Local Scripts.EAConstants-JScript
/*
* Script Name : Export List of SVN Packages
* Author : SDK
* Purpose : Export a package and all of its subpackages information related to version
* controlled. The exported file then can be used to automatically import
* the XMIs
* Date : 30 July 2013
* HOW TO USE : 1. Select the package that you would like to export in the Project Browser
* 2. Change the output filepath in this script if necessary.
* By default it is "D:\\EAOutput.txt"
* 3. Send the output file to your colleague who wanted to import the XMIs
*/
var f;

function main()
{
    // UPDATE THE FOLLOWING OUTPUT FILE PATH IF NECESSARY
    var filename = "D:\\EAOutput.txt";
    var ForReading = 1, ForWriting = 2, ForAppending = 8;

    Repository.EnsureOutputVisible( "Script" );
    Repository.ClearOutput( "Script" );
    Session.Output("Start generating output...please wait...");

    var treeSelectedType = Repository.GetTreeSelectedItemType();
    switch ( treeSelectedType )
    {
        case otPackage:
        {
            var fso = new ActiveXObject("Scripting.FileSystemObject");
            f = fso.OpenTextFile(filename, ForWriting, true);

            var selectedObject as EA.Package;
            selectedObject = Repository.GetContextObject();

            reportPackage(selectedObject);
            loopChildPackages(selectedObject);

            f.Close();
            Session.Output( "Done! Check your output at " + filename);
            break;
        }
        default:
        {
            Session.Prompt( "This script does not support items of this type.", promptOK );
        }
    }
}

function loopChildPackages(thePackage)
{
    for (var j = 0 ; j < thePackage.Packages.Count; j++)
    {
        var child as EA.Package;
        child = thePackage.Packages.GetAt(j);
        reportPackage(child);
        loopChildPackages(child);
    }
}

function getParentPath(childPackage)
{
    if (childPackage.ParentID != 0)
    {
        var parentPackage as EA.Package;
        parentPackage = Repository.GetPackageByID(childPackage.ParentID);
        return getParentPath(parentPackage) + "/" + parentPackage.Name;
    }
    return "";
}

function reportPackage(thePackage)
{
    f.WriteLine("GUID=" + thePackage.PackageGUID + ";"
        + "NAME=" + thePackage.Name + ";"
        + "VCCFG=" + getVCCFG(thePackage) + ";"
        + "XML=" + thePackage.XMLPath + ";"
        + "PARENT=" + getParentPath(thePackage).substring(1) + ";"
        );
}

function getVCCFG(thePackage)
{
    if (thePackage.IsVersionControlled)
    {
        var array = new Array();
        array = (thePackage.Flags).split(";");
        for (var z = 0 ; z < array.length; z++)
        {
            var pos = array[z].indexOf('=');
            if (pos > 0)
            {
                var key = array[z].substring(0, pos);
                var value = array[z].substring(pos + 1);
                if (key=="VCCFG")
                {
                    return (value);
                }
            }
        }
    }
    return "";
}

main();
And the script to do the import:
!INC Local Scripts.EAConstants-JScript
/*
* Script Name : Import List Of SVN Packages
* Author : SDK
* Purpose : Imports a package with all of its sub packages generated from
* "Export List Of SVN Packages" script
* Date : 01 Aug 2013
* HOW TO USE : 1. Get the output file generated by "Export List Of SVN Packages" script
* from your colleague
* 2. Get the XMIs in the SVN local copy
* 3. Change the path to the output file in this script if necessary (var filename).
* By default it is "D:\\EAOutput.txt"
* 4. Change the path to local SVN
* 5. Run the script
*/
var f;
var svnPath;

function main()
{
    // CHANGE THE FOLLOWING TWO LINES ACCORDING TO YOUR INPUT AND LOCAL SVN COPY
    var filename = "D:\\EAOutput.txt";
    svnPath = "D:\\svn.xxx.com\\yyy\\docs\\design\\";

    var ForReading = 1, ForWriting = 2, ForAppending = 8;

    Repository.EnsureOutputVisible( "Script" );
    Repository.ClearOutput( "Script" );
    Session.Output("[INFO] Start importing packages from " + filename + ". Please wait...");

    var fso = new ActiveXObject("Scripting.FileSystemObject");
    f = fso.OpenTextFile(filename, ForReading);

    // Read from the file and display the results.
    while (!f.AtEndOfStream)
    {
        var r = f.ReadLine();
        parseLine(r);
        Session.Output("--------------------------------------------------------------------------------");
    }
    f.Close();

    Session.Output("[INFO] Finished");
}
function parseLine(line)
{
    Session.Output("[INFO] Parsing " + line);

    var array = new Array();
    array = (line).split(";");

    var guid;
    var name;
    var isVersionControlled;
    var xmlPath;
    var parentPath;

    isVersionControlled = false;
    xmlPath = "";

    for (var z = 0 ; z < array.length; z++)
    {
        var pos = array[z].indexOf('=');
        if (pos > 0)
        {
            var key = array[z].substring(0, pos);
            var value = array[z].substring(pos + 1);
            if (key=="GUID") {
                guid = value;
            } else if (key=="NAME") {
                name = value;
            } else if (key=="VCCFG") {
                if (value != "") {
                    isVersionControlled = true;
                }
            } else if (key=="XML") {
                if (isVersionControlled) {
                    xmlPath = value;
                }
            } else if (key=="PARENT") {
                parentPath = value;
            }
        }
    }

    // Quick check for target if already exist to speed up process
    var targetPackage as EA.Package;
    targetPackage = Repository.GetPackageByGuid(guid);
    if (targetPackage != null)
    {
        // target exists, do not do anything
        Session.Output("[DEBUG] Target package \"" + name + "\" already exist");
        return;
    }
    // Split the parent path first, then size the packages array accordingly
    // (JScript arrays use .length, not .Count).
    var paths = (parentPath).split("/");
    var packages = new Array(paths.length);
    for (var i = 0; i < paths.length; i++)
    {
        packages[i] = null;
    }

    if (paths.length < 2)
    {
        Session.Output("[INFO] Skipped root or level1");
        return;
    }

    packages[0] = selectRoot(paths[0]);
    packages[1] = selectPackage(packages[0], paths[1]);
    if (packages[1] == null)
    {
        Session.Output("[ERROR] Cannot find " + paths[0] + "/" + paths[1] + " in Project Browser");
        return;
    }

    for (var j = 2; j < paths.length; j++)
    {
        packages[j] = selectPackage(packages[j - 1], paths[j]);
        if (packages[j] == null)
        {
            Session.Output("[DEBUG] Creating " + paths[j]);
            // create the parent package
            var parent as EA.Package;
            parent = Repository.GetPackageByGuid(packages[j-1].PackageGUID);
            packages[j] = parent.Packages.AddNew(paths[j], "");
            packages[j].Update();
            parent.Update();
            parent.Packages.Refresh();
            break;
        }
    }
    // Check if name (package to import) already exist or not
    var targetPackage = selectPackage(packages[paths.length - 1], name);
    if (targetPackage == null)
    {
        if (xmlPath == "")
        {
            Session.Output("[DEBUG] Creating " + name);
            // The package is not SVN controlled
            var newPackage as EA.Package;
            newPackage = packages[paths.length - 1].Packages.AddNew(name,"");
            Session.Output("New GUID = " + newPackage.PackageGUID);
            newPackage.Update();
            packages[paths.length - 1].Update();
            packages[paths.length - 1].Packages.Refresh();
        }
        else
        {
            // The package is SVN controlled, import its XMI
            Session.Output("[DEBUG] Need to import: " + svnPath + xmlPath);
            var project as EA.Project;
            project = Repository.GetProjectInterface();
            var result;
            Session.Output("GUID = " + packages[paths.length - 1].PackageGUID);
            Session.Output("GUID XML = " + project.GUIDtoXML(packages[paths.length - 1].PackageGUID));
            Session.Output("XMI file = " + svnPath + xmlPath);
            result = project.ImportPackageXMI(project.GUIDtoXML(packages[paths.length - 1].PackageGUID), svnPath + xmlPath, 1, 0);
            Session.Output(result);
            packages[paths.length - 1].Update();
            packages[paths.length - 1].Packages.Refresh();
        }
    }
    else
    {
        // target exists, do not do anything
        Session.Output("[DEBUG] Target package \"" + name + "\" already exist");
    }
}
function selectPackage(thePackage, childName)
{
    var childPackage as EA.Package;
    childPackage = null;

    if (thePackage == null)
        return null;

    for (var i = 0; i < thePackage.Packages.Count; i++)
    {
        childPackage = thePackage.Packages.GetAt(i);
        if (childPackage.Name == childName)
        {
            Session.Output("[DEBUG] Found " + childName);
            return childPackage;
        }
    }

    Session.Output("[DEBUG] Cannot find " + childName);
    return null;
}

function selectRoot(rootName)
{
    for (var y = 0; y < Repository.Models.Count; y++)
    {
        var root as EA.Package;
        root = Repository.Models.GetAt(y);
        if (root.Name == rootName)
        {
            return root;
        }
    }
    return null;
}

main();

Adobe AIR NativeProcess fails with spaces in arguments?

I have a problem running the NativeProcess if I put spaces in the arguments
if (Capabilities.os.toLowerCase().indexOf("win") > -1)
{
    fPath = "C:\\Windows\\System32\\cmd.exe";
    args.push("/c");
    args.push(scriptDir.resolvePath("helloworld.bat").nativePath);
}

file = new File(fPath);

var nativeProcessStartupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
nativeProcessStartupInfo.executable = file;

args.push("blah");
nativeProcessStartupInfo.arguments = args;

process = new NativeProcess();
process.start(nativeProcessStartupInfo);
In the above code, if I use
args.push("blah")
everything works fine.
If I use
args.push("blah blah")
the program breaks as if the file wasn't found.
It seems I'm not the only one:
http://tech.groups.yahoo.com/group/flexcoders/message/159521
As one of the users there pointed out, it really seems like an awful limitation for a cutting-edge SDK of the 21st century. Even Alex Harui didn't have the answer there, and he's known to work around every Adobe bug :)
Any ideas?
I am using the AIR 2.6 SDK in JavaScript like this, and it is working fine even for spaces.
Please check your code against this one:
var file = air.File.applicationDirectory;
file = file.resolvePath("apps");
if (air.Capabilities.os.toLowerCase().indexOf("win") > -1)
{
    file = file.resolvePath(appFile);
}

var nativeProcessStartupInfo = new air.NativeProcessStartupInfo();
nativeProcessStartupInfo.executable = file;

var args = new air.Vector["<String>"]();
for (i = 0; i < arguments.length; i++)
    args.push(arguments[i]);
nativeProcessStartupInfo.arguments = args;

process = new air.NativeProcess();
process.addEventListener(air.ProgressEvent.STANDARD_OUTPUT_DATA, onOutputData);
process.addEventListener(air.ProgressEvent.STANDARD_INPUT_PROGRESS, inputProgressListener);
process.start(nativeProcessStartupInfo);
To expand on this: the reason that this works (see the post above):
var args = new air.Vector["<String>"]();
for (i = 0; i < arguments.length; i++)
    args.push(arguments[i]);
nativeProcessStartupInfo.arguments = args;
is that AIR expects the arguments passed to the NativeProcess to be delimited by spaces. It chokes if you pass "C:\folder with spaces\myfile.doc" (and, by the way, for AIR a Windows file path needs to be "C:\\folder with spaces\\myfile.doc"); you would need to do this:
args.push("C:\\folder");
args.push("with");
args.push("spaces\\myfile.doc");
Hence, something like this works:
var processArgs = new air.Vector["<String>"]();
var path = "C:\\folder with spaces\\myfile.doc";
var args = path.split(" ");
for (var i = 0; i < args.length; i++) {
    processArgs.push(args[i]);
};
UPDATE - SOLUTION
The string generated by the File object (by either nativePath or resolvePath) uses "\" as the path separator. Replace "\" with "/" and it works.
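A small sketch of that fix (the document path is a placeholder and args is assumed to be the argument vector being built; this is an illustration, not code from the original post):
// Swap backslashes for forward slashes before handing the path to NativeProcess.
var docFile:File = File.documentsDirectory.resolvePath("My Games/The Map.map"); // placeholder path
var fixedPath:String = docFile.nativePath.replace(/\\/g, "/");
args.push(fixedPath);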
I'm having the same problem trying to call 7za.exe using NativeProcess. If you try to access various Windows directories, the whole thing fails horribly. Even trying to run command.exe and calling a batch file fails, because you still have to pass a path with spaces through "arguments" on the NativeProcessStartupInfo object.
I've spent the better part of a day trying to get this to work and it will not work. Whatever happens to spaces in "arguments" totally destroys the path.
Example 7za.exe from command line:
7za.exe a MyZip.7z "D:\docs\My Games\Some Game Title\Maps\The Map.map"
This works fine. Now try that with Native Process in AIR. The AIR arguments sanitizer is FUBAR.
I have tried countless ways to pass the arguments and it just fails. Interestingly, I can get it to spit out a zip file, but with no content in the zip. I figure this is due to the first argument set finally working but then failing for the path argument.
For example:
processArgs[0] = 'a';
processArgs[1] = 'D:\apps\flash builder 4.5\project1\bin-debug\MyZip.7z';
processArgs[2] = 'D:\docs\My Games\Some Game Title\Maps\The Map.map';
For some reason this spits out a zip file named bin-debugMyZip.7z, but the zip is empty.
Whatever AIR is doing, it is fraking up the path strings. I've tried adding quotes around those paths in various ways. Nothing works.
I thought I could fall back on calling a batch file from this example:
http://technodesk.wordpress.com/2010/04/15/air-2-0-native-process-batch-file/
But it fails as well because it still requires the path to be passed through arguments.
Anyone have any luck calling 7z or dealing with full paths in the NativeProcess? All these little happy tutorials don't deal with real windows folder structure.
A solution that works for me: set the path with spaces as the nativeProcessStartupInfo.workingDirectory property. See the example below:
public function openPdf(pathToPdf:String):void
{
    var nativeProcessStartupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
    var file:File = File.applicationDirectory.resolvePath("C:\\Windows\\System32\\cmd.exe");
    nativeProcessStartupInfo.executable = file;

    if (Capabilities.os.toLowerCase().indexOf("win") > -1)
    {
        nativeProcessStartupInfo.workingDirectory = File.applicationDirectory.resolvePath(pathToPdf).parent;

        var processArgs:Vector.<String> = new Vector.<String>();
        processArgs[0] = "/k";
        processArgs[1] = "start";
        processArgs[2] = "test.pdf";
        nativeProcessStartupInfo.arguments = processArgs;

        process = new NativeProcess();
        process.start(nativeProcessStartupInfo);
        process.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onOutputData);
    }
}
args.push( '"blah blah"' );
The command line, after all, supports spaces if they are nested within "".
So if, let's say, you have a file argument:
'test/folder with space/blah'
convert it to the following:
'test/"folder with space"/blah'
Optionally, use a filter:
I once had a problem like this in AIR; I simply filter the text before I push it into the array. My reference uses the CASA lib, though.
import org.casalib.util.ArrayUtil;
http://casalib.org/
/**
 * Filters a string input for 'safe handling', and returns it
 **/
public function stringFilter(inString:String, addPermitArr:Array = null, permitedArr:Array = null):String {
    var sourceArr:Array = inString.split(''); // Splits the string input up
    var outArr:Array = new Array();

    if (permitedArr == null) {
        permitedArr = ("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890" as String).split('');
    }
    if (addPermitArr != null) {
        permitedArr = permitedArr.concat(addPermitArr);
    }

    for (var i:int = 0; i < sourceArr.length; i++) {
        if (ArrayUtil.contains(permitedArr, sourceArr[i]) != 0) { // it is allowed
            outArr.push(sourceArr[i]);
        }
    }
    return (outArr.join('') as String);
}
And just filter it via
args.push( stringFilter( 'blah blah', new Array('.') ) );
Besides, it is really bad practice to use spaces in file names / arguments; use '_' instead. This convention seems to originate from Linux, though (the question of spaces in file names).
This works for me on Windows 7:
var Xargs:Array = String("/C#echo#a trully hacky way to do this :)#>#C:\\Users\\Benjo\\AppData\\Roaming\\com.eblagajna.eBlagajna.POS\\Local Store\\a.a").split("#");
var args:Vector.<String> = new Vector.<String>();

for (var i:int = 0; i < Xargs.length; i++) {
    trace("Pushing: " + Xargs[i]);
    args.push(Xargs[i]);
};
NPI.arguments = args;
If your application path or a parameter contains spaces, make sure to wrap it in quotes. For example, if the path of the application has spaces, such as C:\Program Files (x86)\Camera\Camera.exe, use quotes like:
"C:\Program Files (x86)\Camera\Camera.exe"