I'd like to run a bash shell in a 'normal' document window in SlickEdit.
At a minimum I'd be content with running a command and having all output captured into a document window. Better would be the ability to interactively work with the shell in that window.
This is a bit crude, but it's what I use for launching external programs
(including bash scripts). When I used to work in Windows+Cygwin I also had a wrapper around the bash script, but I forget why I needed that.
Remember that you can always tie specific actions to your project
(build, compile, etc.), and you can add your own as well under Project->Properties->Tools->New. All of those commands can execute in the process window.
#include "slick.sh"

static _str mytmp = '/tmp/myvstmp.txt';

_command git_annotate(_str filename='') name_info(',' VSARG2_MACRO)
{
   if (filename == '') {
      filename = p_buf_name;
   }
   int curr_line = p_line;
   delete_file(mytmp); // make sure we don't pick up a stale file
   if (file_match(mytmp, '1') == mytmp) {
      message('Tmp file delete failed! ('mytmp') change permissions and if still failing - restart vs');
      return 1;
   }
   shell('/usr/bin/git blame -s 'filename' | sed "s#^\(.\{8\}\) [^)]*) #\1 #" >'mytmp, 'p');
   if (file_match(mytmp, '1') != mytmp) {
      message('Annotate failed');
      return 1;
   }
   int status = edit('-w 'mytmp);
   if (status) {
      message('Error opening output file for display.');
      return 1;
   }
   goto_line(curr_line + 1);
   // keep disk clean
   p_buf_flags |= VSBUFFLAG_THROW_AWAY_CHANGES;
   name("* annotate output *" filename, false);
   delete_file(mytmp);
}
Sorry, I couldn't come up with a better title. My problem is the following: if I execute a Tcl proc, I can wrap the execution in catch to catch and process errors. I do this in my code to get the same error output everywhere. However, my program provides numerous procs which the user can use in scripts (mostly at the outermost level), and it would be cumbersome if the user had to wrap every one of them in a catch. I could of course use an additional level of indirection in each of those commands, but I wanted to ask whether there is a way to catch errors from all executed commands without explicitly using catch on each invocation?
Thanks for your help!
Read this paragraph first: ↓
The single most important principle of error handling is don't throw away errors unless you know for sure that that's the correct way to handle them. Doing so just because they're unsightly is very bad! (Logging them is far better.)
The closest you can get is to run the whole of your existing code inside a catch or try. You can put a source inside that, so a little driver script to retrofit on existing code is just something like:
set argc [llength [set argv [lassign $argv argv0]]]
catch {source $argv0}
That assumes you apply it when invoking the overall script. You might also need to set up interp bgerror:
interp bgerror {} {apply {{msg opt} {
    if {[lindex [dict get $opt -errorcode] 0] eq "EXPECTED"} {
        # Ignore this
    } else {
        # Unexpected error; better tell the user
        puts "ERROR: $msg"
        puts [dict get $opt -errorinfo]
    }
}}}
It's not a really good idea to do this, though. If you hide all the errors, how will you find and fix any of them? Using try is better, since it lets you hide only the expected errors:
try {
    source $argv0
} trap EXPECTED {} {
    # Ignore this
}
and I'd probably wrap things up so that I have local variables:
apply {{} {
    global argv0 argv argc
    set argc [llength [set argv [lassign $argv argv0]]]
    try {
        uplevel #0 [list source $argv0]
    } trap EXPECTED {msg} {
        # Log this; the logging engine is out of scope for the question
        log DEBUG $msg
    }
}}
You'll need to experiment to see what to trap.
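For a failure to be trappable as EXPECTED, the raising command has to set an -errorcode whose first word is EXPECTED. A minimal sketch (the readConfig proc and the EXPECTED CONFIG MISSING code are invented for illustration; the convention is what matters):

```tcl
proc readConfig {path} {
    if {![file exists $path]} {
        # The first word of -errorcode is what `trap EXPECTED` matches on
        return -code error -errorcode {EXPECTED CONFIG MISSING} \
            "no config file at $path"
    }
    # ... normal processing ...
}

try {
    readConfig /no/such/app.conf
} trap EXPECTED {msg} {
    puts "expected failure: $msg"
}
```

Errors raised without such an -errorcode fall through the trap and propagate as usual, which is exactly the split between expected and unexpected failures described above.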
How would it be possible to bypass functions that don't exist in DM,
so that the main code still runs? Try/catch does not seem to work, e.g.
image doSomething(number a, number b)
{
    try
    {
        whateverfunction(a, b)
    }
    catch
    {
        continue
    }
}

number a, b
doSomething(a, b)
Conditioning on existence won't work either, e.g.
image doSomething(number a, number b)
{
    if (doesfunctionexist("whateverfunction"))
    {
        whateverfunction(a, b)
    }
}

number a, b
doSomething(a, b)
Thanks in advance!
As "unknown" commands are caught by the script interpreter, there is no easy way to do this. However, you can construct a workaround using ExecuteScriptString().
There is an example tutorial to be found in this e-book, but in short, you want to do something like the following:
String scriptCallStr = "beep();\n"
scriptCallStr += "MyUnsaveFunctionCall();\n" // += so the first line isn't overwritten

number exitVal
Try { exitVal = ExecuteScriptString( scriptCallStr ); }
Catch { exitVal = -1; break; }

if ( -1 == exitVal )
{
    OKDialog( "Sorry, couldn't do:\n" + scriptCallStr )
}
else
{
    OKDialog( "All worked. Exit value: " + exitVal )
}
This works nicely and easily for simple commands, and if your task is only to "verify" that a script could run.
It becomes clumsy when you need to pass parameters around, but even then there are ways to do so. (The 'outer' script could create an object and pass the object ID as a string. Similarly, the 'inner' script can do the same and return the script-object ID as its exit value.)
Note: you can of course also put doesfunctionexist inside the test script if you only want a "safe test" but don't actually want to execute the command.
Depending on what you need, there might also be another workaround: wrapper functions in a library. This can be useful if you want to run the same script on different PCs, some of which have the functionality - most likely some microscope - while others don't.
You make your main script use wrapper methods and then install different versions of the wrapper-method script as libraries.
void My_SpecialFunction()
{
    SpecialFunction()   // use this line on PCs which have SpecialFunction()
    DoNothing()         // use this alternative line on PCs which don't
}

My_SpecialFunction()
I have used this in the past where the same functionality (-stage movement-) required different commands on different machines.
I have a Gradle script similar to this:
ext {
    dir = null
}

task init << {
    build()
}

task buildAll(type: Exec) {
    workingDir ext.dir
    commandLine 'cmd', '/c', "echo %JAVA_HOME%"
}

def build() {
    ext.dir = "asdf"
    buildAll.execute()
}
When I run the script, I get:
groovy.lang.MissingPropertyException: Cannot get property 'dir' on extra properties extension as it does not exist
Whatever I tried, I couldn't get a task to read a property from "ext". It can be seen from methods (like "build()" in my example), but not from any task other than the default one ("init" in my example).
I understand that the "ext" properties should be accessible from anywhere inside the project, so what am I doing wrong?
UPDATE:
The workflow I'm trying to achieve (as asked by Opal):
I have several environments I need to build with one script. Each of these environments is listed in a CSV file with a line: <environment>,<version>.
Script then needs to do the following:
Delete existing directory
Checkout code from SVN into new directory (both directory and SVN url depend on environment and version)
Copy some settings files (paths depend on version)
Edit some of those settings files (values depend on environment and version)
Set some environment variables (JAVA_HOME, ANT_HOME...) (depends on version)
Run three build commands (${ANT_HOME}/bin/ant -f $checkedOutCodeDirectory/Build/build-all.xml target1, then target2 and target3)
This needs to be executed for each environment
Extra properties should be created via ext but referred to either via the project instance or with no qualifier at all - so project.dir or just dir. The first change to the script is therefore:
ext {
    dir = null
}

task init << {
    build()
}

task buildAll(type: Exec) {
    workingDir dir // ext.dir -> dir
    commandLine 'cmd', '/c', "echo %JAVA_HOME%"
}

def build() {
    ext.dir = "asdf"
    buildAll.execute()
}
Now, before any task or method is executed, the script is read and parsed, so the whole body of buildAll is configured before any other part runs. Thus it will always fail, since the dir property has no value at that point. Proof:
ext {
    dir = null
}

task init << {
    build()
}

task buildAll(type: Exec) {
    workingDir dir ? dir : project.rootDir
    commandLine 'cmd', '/c', "echo %JAVA_HOME%"
}

def build() {
    ext.dir = "asdf"
    buildAll.execute()
}
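One way to make the original approach work (a sketch along the same lines, keeping the script's legacy << and execute() style) is to defer reading the property until execution time with doFirst, so it is only resolved after build() has assigned it:

```groovy
ext {
    dir = null
}

task buildAll(type: Exec) {
    commandLine 'cmd', '/c', "echo %JAVA_HOME%"
    // Resolve the property at execution time, not configuration time
    doFirst {
        workingDir project.dir
    }
}

task init << {
    build()
}

def build() {
    ext.dir = "asdf"
    buildAll.execute()
}
```

The doFirst block runs just before the Exec task executes, by which point ext.dir has been set, so the configuration-time failure shown above is avoided.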
I've been trying to get typescript building via the build servers on visualstudio.com, and I've done the normal thing of bringing typescript into source control. But I'm getting the following issue:
VSTSC : error TS5007: Build:
Cannot resolve referenced file:
'COMPUTE_PATHS_ONLY'.
[C:\a\src\Main\RecruitCloud\RecruitCloud.csproj]
I'm aware of the encoding issues, but in all the examples I've seen the culprit file has been named in the error message.
I'm starting to think this could be down to the number of typescript files I'm compiling in the project.
Any ideas?
This is a configuration option for the VsTsc task, the one that runs the compiler. It is used in the PreComputeCompileTypeScript target. The intention is to make the VsTsc task go through all the motions, except to run the compiler. That didn't pan out on your machine, it actually did run the compiler. Which then threw a fit since it can't find a file named COMPUTE_PATHS_ONLY.
The VsTsc task is stored in C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\TypeScript\TypeScript.Tasks.dll. Looking at the assembly with a decompiler:
protected override int ExecuteTool(string pathToTool, string responseFileCommands, string commandLineCommands)
{
    if (this.Configurations.Contains("--sourcemap"))
    {
        this.generateSourceMaps = true;
    }
    else
    {
        this.generateSourceMaps = false;
    }
    if (this.Configurations.Contains("--declaration"))
    {
        this.generateDeclarations = true;
    }
    else
    {
        this.generateDeclarations = false;
    }
    this.GenerateOutputPaths();
    if (!responseFileCommands.Contains("COMPUTE_PATHS_ONLY"))
    {
        return base.ExecuteTool(pathToTool, responseFileCommands, commandLineCommands);
    }
    return 0;
}
Note the !responseFileCommands.Contains() test to bypass the base.ExecuteTool() call.
All I can guess is that the method doesn't look like this on your machine. With the most likely cause that you have an outdated version of TypeScript.Tasks.dll. On my machine with VS2013 Update 4 installed it is dated Nov 11, 2014 with a size of 27816 bytes.
Your best bet would be to simply resave all the files in Unicode encoding. You can do that with a quick PowerShell script (adapted from "Change files' encoding recursively on Windows?"; note -Recurse so subdirectories are covered, and the *.ts filter for TypeScript files):
Get-ChildItem -Recurse *.ts | ForEach-Object {
    $content = $_ | Get-Content
    Set-Content -PassThru $_.FullName $content -Encoding UTF8 -Force
}
I want to run PhantomJS scripts from my program, but since the scripts may not be written by me, I need to make sure PhantomJS exits after the execution either completes or fails for any reason (e.g., invalid syntax, timeout, etc.). So far, all I've read says you must always include the instruction phantom.exit() for PhantomJS to exit. Is there any way to automatically close PhantomJS after it executes a given script?
Thanks.
Create a file run-javascript.js:
var system = require('system');

try {
    for (var i = 1; i < system.args.length; i++) {
        var scriptFileName = system.args[i];
        console.log("Running " + scriptFileName + " ...");
        require(scriptFileName);
    }
}
catch (error) {
    console.log(error);
    console.log(error.stack);
}
finally {
    phantom.exit();
}
Then to run your file myscript.js:
phantomjs run-javascript.js ./myscript.js
You have to include an explicit path for the myscript.js, i.e. ./myscript.js, otherwise phantomjs will look for the script as a module.
There are three execution scenarios that are handled here:
Successful execution, in which case phantom.exit() is called in the finally clause.
Error in the script being run, in which case the require function prints a stacktrace and returns (without throwing any error to the calling code).
Error running the script (e.g. it doesn't exist), in which case the catch clause prints out the stacktrace and phantom.exit() is called in the finally clause.
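Since the question also mentions timeouts: the wrapper above handles errors but not a script that simply hangs without ever failing. A watchdog timer can be added near the top of run-javascript.js for that case (the 30-second limit is an arbitrary value for this sketch, and the snippet only runs under PhantomJS, which provides the phantom object):

```javascript
// Watchdog for run-javascript.js: force PhantomJS to exit if the
// wrapped scripts have not finished within the time limit.
var TIMEOUT_MS = 30000; // arbitrary limit chosen for this sketch

setTimeout(function () {
    console.log("Timed out after " + TIMEOUT_MS + " ms");
    phantom.exit(1); // non-zero exit code signals failure to the caller
}, TIMEOUT_MS);
```

If the wrapped scripts finish first, the finally clause's phantom.exit() ends the process before the timer ever fires.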