Writing a table to a file using D3 Pick

In D3, suppose I have a file called foo and I want to write the contents of the file out to /var/tmp/bar. The documentation leads me to believe that it should be possible to make D3 write the file to the file system by changing the D pointer into a Q pointer, but I can't figure out how to make this happen.

You can do this in at least a few ways.
1) You don't want to change a d-pointer to a q-pointer; you just want to create a q-pointer. In other words, there's no need to have a d-pointer first to access the host file system. So your q-pointer called 'bar' will look like this:
Q
/var/tmp/bar
With that you can simply:
copy foo
to: (bar
Note that in this case 'bar' is a host OS folder/directory, not a file. A D3 'file' is a table that has multiple records. That translates to a host OS directory with multiple files.
Options are available on the Copy command to suppress the display of item IDs (keys) as records are copied (see docs).
2) You don't even need a q-pointer:
copy foo
to: (/var/tmp/bar
3) Similarly in code you can use the q-pointer or you can use the direct path:
open 'bar' to f.bar1 ...
open '/var/tmp/bar' to f.bar2 ...
==
The path syntax is using a mechanism called the OSFI (see docs). With this syntax you can specify a driver. The default driver called "unix:" converts attribute marks to the *nix EOL which is a line-feed x0A. If you're on Windows the default is "dos:" which converts attribute marks to CRLF x0D0A. You can force a non-default by preceding the path with the driver. So to create a DOS-format file in Unix/Linux, use dos:/var/tmp/bar. The default drivers also convert between tab and 4-spaces (see docs). Values and subvalues are not converted but a new driver can be created to do so. Use the 'bin:' driver to avoid conversions, so bin:/var/tmp/bar will not convert #am (xFE) to x0A, etc.
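For example, a minimal sketch mirroring the open syntax above (the file variable names are just placeholders):
open 'dos:/var/tmp/bar' to f.crlf ...
open 'bin:/var/tmp/bar' to f.raw ...
The first forces CRLF conversion even on Unix/Linux; the second passes attribute marks through unchanged.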
If you need more detail I'll be happy to add to this.


Getting Error for Excel to Table Conversion

I just started learning Python and now I'm trying to integrate that with my GIS knowledge. As the title suggests, I'm attempting to convert an Excel sheet to a table, but I keep getting errors: one is wholly undecipherable to me, and the other seems to suggest that my file does not exist, which I know is incorrect since I copied its location directly from its properties.
Here is a screenshot of my environment. Please help if you can and thanks in advance.
(Screenshot: Environment/Error)
Simply put, you have included the workspace directory in the filename variable, so when arcpy handles it, it tries to access a file that does not exist, in an unknown workspace.
Try this.
arcpy.env.workspace = "J:\egis_work\dpcd\projects\SHARITA\Python\"
arcpy.ExcelToTable_conversion("Exceltest.xlsx", "Bookstorestable", "Sheet1")
Arcpy uses the following syntax to convert geodatabase tables to Excel.
It is straightforward.
Example
Excel files cannot be stored in the geodatabase, so the most reasonable thing is to store them in the root folder that holds the geodatabase containing the table. Say I want to convert the table below to Excel and save it in that root folder, i.e. the folder in which the geodatabase sits.
I will go about it as follows; I have put the explanations after the #.
import arcpy
import os
from datetime import datetime, date, time
# Set environment settings
in_table= r"C:\working\Sunderwood\Network Analyst\MarchDistances\Centroid.gdb\SunderwoodFirstArcpyTable"
#os.path.basename(in_table)
out_xls= os.path.basename(in_table)+ datetime.now().strftime('%Y%m%d') # Here
#os.path.basename(in_table) - gives the base name of the pathname. In this case, it returns the table name
# + is used in Python to concatenate
# datetime.now() - gives today's date
# strftime('%Y%m%d') converts today's date into a string in the format YYYYMMDD
# Put all the above together and you have a new file name: the input table name plus today's date
#os.path.dirname() method in Python is used to get the directory name from the specified path
geodatabase = os.path.dirname(in_table)
# In this case, os.path.dirname(in_table) gives us the geodatabase
# The join() method takes all items in an iterable and joins them into one string
SaveInFolder= "\\".join(geodatabase.split('\\')[:-1])
# In this case, I tell Python to join with \\ the components of the primary directory above, which I have called geodatabase, after dropping the last component. I will explain the split below.
# I use the split() method, which splits a string into a list
#In the case above, geodatabase.split('\\') gives ['W:', 'working', 'Sunderwood', 'Network Analyst', 'MarchDistances', 'Centroid.gdb']. However, that is not quite what I want: I want to drop 'Centroid.gdb', so [:-1] removes it and the join leaves me with the following path: 'W:\\working\\Sunderwood\\Network Analyst\\MarchDistances'
#Before I tell arcpy to save, I have to specify the workspace in which it will save. So I now make my environment the SaveInFolder
arcpy.env.workspace =SaveInFolder
## Now I have to tell arcpy what I will call my new table. I use os.path.join. This method concatenates path components with exactly one directory separator following each non-empty part except the last path component
newtable = os.path.join(arcpy.env.workspace, out_xls)
#In the above case it will give me "W:\working\Sunderwood\Network Analyst\MarchDistances\SunderwoodFirstArcpyTable20200402"
# You notice the newtable does not have an excel extension. I resort to + to concatenate .xls onto my path and make it "W:\working\Sunderwood\Network Analyst\MarchDistances\SunderwoodFirstArcpyTable20200402.xls"
table= newtable+".xls"
#Finally, I call the arcpy method and feed it with the required variables
# Execute TableToExcel
arcpy.TableToExcel_conversion(in_table, table)
print(table + " is now available")

stat function for perl6

Is there an alternative way in Perl 6 to get file attribute details like size, access_time, modified_time, etc., without having to invoke a native call?
As per the doc it is "unlikely to be implemented as a built in as its POSIX specific".
What workaround options are available excluding the system call to stat?
Any ideas or pointers are much appreciated.
Thanks.
See the IO::Path doc.
For example:
say 'foo'.IO.s; # 3 if 'foo' is an existing file of size 3 bytes
.IO on a string creates an IO::Path object for the filesystem entry at the path given by the string.
See examples of using junctions to get multiple attributes at the same time at the doc on ACCEPTS.
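For example, a small sketch (assuming 'foo' exists as a plain, readable file; the particular attribute pairs here are just an illustration):
say 'foo'.IO ~~ :e & :f & :r; # True if 'foo' exists, is a plain file, and is readable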
I'm not sure if the following is too much. Ignore it if it is. Hopefully it's helpful.
You can discover/explore some of what's available in Perl 6 via its HOW objects (aka Higher Order Workings objects, How Objects Work objects, metaobjects -- whatever you want to call them) which know HOW objects of a particular type work.
say IO::Path.^methods
displays:
(BUILD new is-absolute is-relative parts volume dirname basename extension
Numeric sibling succ pred open watch absolute relative cleanup resolve
parent child add chdir rename copy move chmod unlink symlink link mkdir
rmdir dir slurp spurt lines comb split words e d f s l r w rw x rwx z
modified accessed changed mode ACCEPTS Str gist perl IO SPEC CWD path BUILDALL)
Those are some of the methods available on an IO::Path object.
(You can get more or less with adverbs, eg. say IO::Path.^methods(:all), but the default display aims at giving you the ones you're likely most interested in. The up arrow (^) means the method call (.methods) is not sent to the invocant but rather is sent "upwards", up to its HOW object as explained above.)
Here's an example of applying some of them one at a time:
spurt 'foo', 'bar'; # write a three letter string to a file called 'foo'.
for <e d f s l r w rw x rwx z modified accessed changed mode>
-> $method { say 'foo'.IO."$method"() }
The second line does a for loop over the methods listed by their string names in the <...> construct. To call a method on an invocant given its name in a variable $qux, write ."$qux"(...).
For anyone looking for an answer to this question in 2021: there is the File::Stat module. It provides some additional stat(2) information such as UID, GID and mode.
#!/usr/bin/env raku
use File::Stat <stat>;
say File::Stat.new(path => $?FILE).mode.base(8);
say stat($?FILE).uid;
say stat($?FILE).gid;

Can I create Variable Names from Constants in Objective-C/Swift?

This question is related to Swift and Objective-C.
I want to create variables from constant strings, so that in future, when I change the name of a variable used throughout the app, I only need to change it in one place and it is changed wherever it is used.
Example:
I have user_id in 14 files; if I want to change user_id into userID I have to change it in all 14 files, but I want to change it in one place only.
One way to do this would be to use the Xcode build process and add a script (the language can be of your choice, but the default is a Bash script).
Create a string-constant text file where you define all the variables you want to change, in a format that expresses the change you want to make, for example:
"variable_one_name" = "new_variable_one_name"
Depending on how 'smart' you want your script to be, you could also list all your variables and include some way of indicating when a variable is not to be replaced.
"variable_one_name" = "new_variable_one_name"
"variable_two_name" = "DO_NOT_CHANGE"
Run a pre-build script on your project that reads in the string-constant text file, then iterates through your source files and performs the desired replacements (a rough sketch is shown below). Be careful to limit the directories you search to your OWN source files!
build project...
This would allow you to manage your constants from one place. However it clearly is only going to help you after you have created a project and written some code :)
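Purely as an illustration, here is a rough, untested sketch of such a pre-build script; the mapping file name, source directory and file extensions are all assumptions you would adapt to your own project:
#!/bin/bash
# Hypothetical pre-build rename script.
# Each line of the mapping file looks like: "old_name" = "new_name"
MAPPING_FILE="$SRCROOT/constant_renames.txt"   # assumed location of the mapping file
SOURCE_DIR="$SRCROOT/MyApp"                    # limit replacements to your OWN sources
while IFS='=' read -r old new; do
  old=$(echo "$old" | tr -d ' "')              # strip quotes and surrounding spaces
  new=$(echo "$new" | tr -d ' "')
  [ -z "$old" ] && continue                    # skip blank lines
  [ "$new" = "DO_NOT_CHANGE" ] && continue     # honour the do-not-replace marker
  grep -rl --include='*.swift' --include='*.m' --include='*.h' "$old" "$SOURCE_DIR" |
  while read -r file; do
    sed -i '' "s/$old/$new/g" "$file"          # BSD sed (macOS): replace in place
  done
done < "$MAPPING_FILE"
A whole-word match rather than plain substring replacement would be safer in practice, but this shows the shape of the approach.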
BASH string replacement
Adding a run script to the Xcode build process

Execute scripts by relative path in Oracle SQL Developer

First, this question relates to Oracle SQL Developer 3.2, not SQL*Plus or iSQL, etc. I've done a bunch of searching but haven't found a straight answer.
I have several collections of scripts that I'm trying to automate (and btw, my SQL experience is pretty basic and mostly MS-based). The trouble I'm having is executing them by a relative path. For example, assume this setup:
scripts/A/runAll.sql
scripts/A/A1.sql
scripts/A/A2.sql
scripts/B/runAll.sql
scripts/B/B1.sql
scripts/B/B2.sql
I would like to have a file scripts/runEverything.sql something like this:
@@/A/runAll.sql
@@/B/runAll.sql
scripts/A/runAll.sql:
@@/A1.sql
@@/A2.sql
where "@@", I gather, means relative path in SQL*Plus.
I've fooled around with making variables but without much luck. I have been able to do something similar using '&1' and passing in the root directory. I.e.:
scripts/runEverything.sql:
#'&1/A/runAll.sql' '&1/A'
#'&1/B/runAll.sql' '&1/B'
and call it by executing this:
@'c:/.../scripts/runEverything.sql' 'c:/.../scripts'
But the problem here has been that B/runAll.sql gets called with the path: c:/.../scripts/A/B.
So, is it possible with SQL Developer to make nested calls, and how?
This approach has two components:
- Set up the active SQL Developer worksheet's folder as the default directory.
- Open a driver script, e.g. runAll.sql (which then changes the default directory to the active working directory), and use relative paths within the runAll.sql script to call sibling scripts.
Set up your scripts' default folder. On the SQL Developer toolbar, use this navigation:
Tools > Preferences
In the preference dialog box, navigate to Database > Worksheet > Select default path to look for scripts.
Enter the default path to look for scripts as the active working directory:
"${file.dir}"
Create a script folder and place all the associated scripts in it:
runAll.sql
A1.sql
A2.sql
The content of runAll.sql would include:
@A1.sql;
@A2.sql;
To test this approach, in SQL Developer, click File, then navigate to and open the script\runAll.sql file.
Next, select all (on the worksheet), and execute.
Through the act of navigating and opening the runAll.sql worksheet, the default file folder becomes "script".
I don't have access to SQL Developer right now so I can't experiment with the relative paths, but with the substitution variables I believe the problem you're seeing is that the positional variables (i.e. &1) are redefined by each start or @. So after your first @runAll, the parent script sees the same &1 that the last child saw, which now includes the /A.
You can avoid that by defining your own variable in the master script:
define path=&1
#'&path/A/runAll.sql' '&path/A'
#'&path/B/runAll.sql' '&path/B'
As long as runAll.sql, and anything it runs, does not also (re)define path, this should work, and you just need to choose a unique name if there is a risk of a clash.
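For example, a sketch (untested) of how the nested scripts/A/runAll.sql could follow the same pattern with its own variable name:
define patha=&1
@'&patha/A1.sql'
@'&patha/A2.sql'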
Again I can't verify this but I'm sure I've done exactly this in the past...
You need to provide the path of the file as a string; give the path in double quotes and it will work.
For Example
#"C:\Users\Arpan Saini\Zions R2\Reports Statements and Notices\Patch\08312017_Patch_16.2.3.17\DB Scripts\snsp.sql";
Execution of SQL:
@yourPath\yourFileName.sql
How to pass parameters to a file:
@A1.sql; (Parameter)
@A2.sql; (Parameter)
This is not an absolute or relative path issue. It's a SQL interpreter issue: by default it looks for files that have a .sql extension.
Please make sure to rename the file to file_name.sql.
Ex: if the workspace has a file called "A", then rename the file from A to "A.sql".

Find merge arrows pointing to a version in ClearCase

I want to find all the merge arrows pointing to a certain version in a script. When I describe the version of the element with the following command:
ct describe filename@@/main/some_branch/3
I get in the result the following:
Hyperlinks:
Merge <- filename##/main/other_branch/2
I want ct describe to output only the relevant information to be used in my script, ie. the versions where the merge arrows come from. In my case, the output should look simply like this:
filename@@/main/other_branch/2
I didn't find any relevant parameters in the -fmt from the man page. Is there any way of doing it?
The only option in the fmt_ccase man page would be
%[hlink:filter]p
Displays the hyperlink source and target, with an arrow pointing from the source to the target. The optional H argument lists only the hyperlink names.
You can optionally specify a filter string, preceded by a colon. This filter if present, restricts the output to names that match the filter string. Case is considered when matching the string.
If this doesn't work, you have to resort to grep/awk commands in order to extract those version from the cleartool describe output.
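For example, an untested sketch (the filter name and the awk field index are assumptions that may need adjusting for your output):
cleartool describe -fmt '%[hlink:Merge]p\n' filename@@/main/some_branch/3
or, as a grep/awk fallback on the plain output:
cleartool describe filename@@/main/some_branch/3 | grep 'Merge <-' | awk '{print $3}'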
The cleartool descr -ahlink option restricts the output a bit.
-ahlink
The listing includes the path names of the objects hyperlinked to pname, annotated with → (listed object is the to-object) or ← (listed object is the from-object).
For example:
-> M:\gamma\vob1\proj\include\db.c##\main\52 <- M:\gamma\vob1\proj\bin\vega##\main\5
Besides the full script option, you can have a look at external third-party tools like R&D Reporter, which can visualize and export those same hyperlinks.
However:
this is a commercial tool
depending on the export output and what you want, you might end up parsing just another output to extract what you need.
For more on that tool, contact Tamir Gefen.