I'm struggling to get the full repository path that Kettle is using. The help on variables (link) states that I can use either ${Internal.Transformation.Repository.Directory} or ${Internal.Job.Repository.Directory}, depending on whether it is a transformation or a job. This works and returns the path to the file, with the repo as root.
Since I need the full path to the file (or to the repo), I tried ${Internal.Job.Filename.Directory} (and the transformation equivalent), but this returns the literal string "Parent Job File Directory" (see below).
As additional information: I am connected to a repository.
Mission: how do I get the full path to the file (job/transformation)?
The directory of the caller of the transformation is ${Internal.Entry.Current.Directory}. This is because a job can contain a job, so what matters is the directory of the current entry in the job.
Note: if you run the transformation directly, rather than calling it from a job, Internal.Job.Filename.File will be the generic "Parent Job File Name" rather than "".
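For example (a hypothetical illustration; the step and file names are my own, not from the question), a Text file output step in a transformation called from a job can write its file next to the calling job entry by setting its filename to:
${Internal.Entry.Current.Directory}/export
The variable resolves to the directory of the job entry that invoked the transformation, whether that is a repository directory or a filesystem path.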
I am facing an odd issue where my lookup returns a FileNotFound error when I use a wildcard path. If I specify an exact file path, the lookup runs without error. However, if I replace the filename with a *, I get a FileNotFound error.
The file is Data_643.json, located in my Azure Data Lake Storage Gen2 account under the labournavigatorfile file system. The exact file path is:
labournavigatorfile/raw_data/Scraped/HeadHunter/Saudi_Arabia/Data_643.json
If I put this exact path into the integration dataset configuration, the pipeline runs without issue. However, as soon as I replace 'Data_643.json' with a '*', the pipeline fails with a FileNotFound error.
What am I doing wrong? This must be something very simple that I am missing. Many thanks for any support.
Exact path works:
Wildcard path throws error:
I have 3 files in my container as file1.json, file2.json, file3.json as shown below:
The following is how I configured my dataset to read using a wildcard, with the same configuration as in the image provided in the question.
When I used this in the lookup, I got the same error:
To overcome this, go to your lookup activity. When you want to use wildcards to read a file or files, check the wildcard file path option, then specify the folder structure and use the wildcard where required. The following is an image for reference.
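Expressed as pipeline JSON, the equivalent source settings look roughly like this (a hedged sketch rather than the exact configuration from the screenshots; the property names follow the ADF source settings for ADLS Gen2, and the folder path is the one from the question):
"source": {
    "type": "JsonSource",
    "storeSettings": {
        "type": "AzureBlobFSReadSettings",
        "recursive": true,
        "wildcardFolderPath": "raw_data/Scraped/HeadHunter/Saudi_Arabia",
        "wildcardFileName": "*"
    }
}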
The following is the debug output when I run the pipeline (Each of my files had 10 rows):
Is it possible to get the full path of the current file within a live template in IntelliJ? I've tried using the groovyScript("new File('.').absolutePath") macro, but that returns /Applications/IntelliJ IDEA.app/Contents/bin/. and not the file path I was hoping for.
Thanks!
According to the docs (emphasis mine):
You can use groovyScript macro with multiple arguments. The first argument is a script text that is executed or a path to the file that contains a script. The next arguments are bound to _1, _2, _3, ..._n variables that are available inside your script. Also, _editor variable is available inside the script. This variable is bound to the current editor.
The _editor is an instance of EditorImpl which holds a reference to the VirtualFile that represents the currently opened file.
Therefore, the following script gets the full path of the currently opened file.
groovyScript("_editor.getVirtualFile().getPath()")
Or if you want to get the path relative to the project's root:
groovyScript("_editor.getVirtualFile().getPath().replace(_editor.getProject().getBaseDir().getPath(), \"\")")
Since IntelliJ IDEA 2019.3, the Live Template macros filePath() and fileRelativePath() are available, so a complicated Groovy script macro is no longer required.
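For example (a minimal sketch; the abbreviation and variable name are made up for illustration), a live template that stamps the current file's path into a comment could be set up as follows:
// Template text for an abbreviation such as "fpath":
// File: $PATH$
// In Edit Template Variables, set the expression for the PATH variable to:
filePath()
A template using fileRelativePath() works the same way and yields the path relative to the project root.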
I have two files in two different folder locations in Trace32. I execute cd.do file_name subroutine_name in Trace32. Trace32 takes the location of the first command executed as the folder from which the following commands need to be executed. How can I execute the routines from two different folders?
There is a pretty good guide here on how to script in Trace32.
http://www2.lauterbach.com/pdf/practice_user.pdf
I do not understand why you need to have them in two different folders; shouldn't it be solved by just having them in the same folder?
Well, maybe you should simply use DO <myscript.cmm> instead of CD.DO <myscript.cmm>.
DO <myscript.cmm> executes the script at the given location but keeps the current working path.
CD.DO <myscript.cmm> changes the working path to the location of the given script and then executes the script.
However, I would recommend writing your scripts in such a way that it doesn't matter whether they are called with CD.DO or just DO. You can achieve that with either absolute paths or with paths relative to the script location. (I prefer the second.)
So imagine the following file structure:
C:\t32\myscripts\start.cmm
C:\t32\myscripts\folder1\routines.cmm
C:\t32\myscripts\folder2\loadapp.cmm
C:\t32\myscripts\folder2\application.elf
You can handle this structure with absolute paths like this:
start.cmm:
DO "C:/t32/myscripts/folder1/routines.cmm" subroutine_A
DO "C:/t32/myscripts/folder2/loadapp.cmm"
folder2/loadapp.cmm:
Data.LOAD.Elf "C:/t32/myscripts/folder2/application.elf"
DO "C:/t32/myscripts/folder1/routines.cmm" subroutine_B
With relative paths you can use the prefix "~~~~" to access other files relative to the location of the currently executed PRACTICE script. The "~~~~" is replaced with the directory of the currently executed script (just as "~" stands for your home directory). There is also a function OS.PPD() which returns the directory of the currently executed PRACTICE script.
So the situation above looks like this with relative paths:
start.cmm:
DO "~~~~/folder1/routines.cmm subroutine_A"
DO "~~~~/folder2/loadapp.cmm"
folder2/loadapp.cmm:
Data.LOAD.Elf "~~~~/application.elf"
DO "~~~~/../folder1/routines.cmm" subroutine_B
I'm trying to load the files of a group of folders in one go. When I set
sourceURI = 'gs://ybbi/bi_landing_zone/files_to_load/app/reports/app_network_analytics_report/201409011*'
All the folders that I want to load start with 20140911, but I get the error:
ERROR: Invalid path: gs://ybbi/bi_landing_zone/files_to_load/apn/reports/appnexus_network_analytics_report/20140901191111_3bab8ec0_092a_43de_a157_db35d1555ea0/
20140901191111_3bab8ec0_092a_43de_a157_db35d1555ea0 is one of these folders (I don't know why it prints the full name of this specific folder).
In some other folder trees this works, but in this specific folder tree it returns the same error.
I know that Cloud Storage doesn't have real folders and the "folder" is just part of the object name, but you understand what I mean.
Is it a bug?
Without more information, it looks like you have an object called gs://ybbi/bi_landing_zone/files_to_load/apn/reports/appnexus_network_analytics_report/20140901191111_3bab8ec0_092a_43de_a157_db35d1555ea0/ that is not a CSV/JSON file. Some tools create these dummy objects in order to simulate directories. BigQuery requires all objects that match the input glob path to be importable files.
One solution would be to change the glob path so it matches a narrower set of files. You can pass multiple paths if that makes things easier. For example, you could pass
gs://ybbi/bi_landing_zone/files_to_load/apn/reports/appnexus_network_analytics_report/20140901191111_3bab8ec0_092a_43de_a157_db35d1555ea0/*
and
gs://ybbi/bi_landing_zone/files_to_load/apn/reports/appnexus_network_analytics_report/20140901191111_some_other_path/*
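With the bq command-line tool, both paths can go into a single load job, since bq load accepts a comma-separated list of source URIs (a sketch; the dataset and table names are hypothetical):
bq load --source_format=NEWLINE_DELIMITED_JSON mydataset.mytable "gs://ybbi/bi_landing_zone/files_to_load/apn/reports/appnexus_network_analytics_report/20140901191111_3bab8ec0_092a_43de_a157_db35d1555ea0/*,gs://ybbi/bi_landing_zone/files_to_load/apn/reports/appnexus_network_analytics_report/20140901191111_some_other_path/*"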
First, this question relates to Oracle SQL Developer 3.2, not SQL*Plus or iSQL, etc. I've done a bunch of searching but haven't found a straight answer.
I have several collections of scripts that I'm trying to automate (and, by the way, my SQL experience is pretty basic and mostly MS-based). The trouble I'm having is executing them by a relative path. For example, assume this setup:
scripts/A/runAll.sql
scripts/A/A1.sql
scripts/A/A2.sql
scripts/B/runAll.sql
scripts/B/B1.sql
scripts/B/B2.sql
I would like to have a file scripts/runEverything.sql containing something like this:
@@/A/runAll.sql
@@/B/runAll.sql
scripts/A/runAll.sql:
@@/A1.sql
@@/A2.sql
where "@@", I gather, means a path relative to the calling script in SQL*Plus.
I've fooled around with defining variables, but without much luck. I have been able to do something similar using '&1' and passing in the root directory, i.e.:
scripts/runEverything.sql:
@'&1/A/runAll.sql' '&1/A'
@'&1/B/runAll.sql' '&1/B'
and call it by executing this:
@'c:/.../scripts/runEverything.sql' 'c:/.../scripts'
But the problem here has been that B/runAll.sql gets called with the path c:/.../scripts/A/B.
So, is it possible with SQL Developer to make nested calls, and how?
This approach has two components:
- Set up the active SQL Developer worksheet's folder as the default directory.
- Open a driver script, e.g. runAll.sql (which then changes the default directory to the active working directory), and use relative paths within the runAll.sql script to call sibling scripts.
Set up your scripts' default folder. In SQL Developer, use this navigation:
Tools > Preferences
In the preference dialog box, navigate to Database > Worksheet > Select default path to look for scripts.
Enter the default path to look for scripts as the active working directory:
"${file.dir}"
Create a script folder and place all the associated scripts in it:
runAll.sql
A1.sql
A2.sql
The content of runAll.sql would include:
@A1.sql;
@A2.sql;
To test this approach, in SQL Developer, click File, then navigate to and open the script\runAll.sql file.
Next, select all (on the worksheet) and execute.
Through the act of navigating to and opening the runAll.sql worksheet, the default file folder becomes "script".
I don't have access to SQL Developer right now so I can't experiment with the relative paths, but with the substitution variables I believe the problem you're seeing is that the positional variables (i.e. &1) are redefined by each start or @. So after your first @runAll, the parent script sees the same &1 that the last child saw, which now includes the /A.
You can avoid that by defining your own variable in the master script:
define path=&1
#'&path/A/runAll.sql' '&path/A'
#'&path/B/runAll.sql' '&path/B'
As long as runAll.sql, and anything it runs, does not also (re)define path, this should work; you just need to choose a unique name if there is a risk of a clash.
Again, I can't verify this, but I'm sure I've done exactly this in the past...
You need to provide the path of the file as a string; put the path in double quotes and it will work.
For example:
@"C:\Users\Arpan Saini\Zions R2\Reports Statements and Notices\Patch\08312017_Patch_16.2.3.17\DB Scripts\snsp.sql";
Execution of SQL:
@yourPath\yourFileName.sql
How to pass parameters to a file:
@A1.sql (parameter)
@A2.sql (parameter)
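To illustrate (a sketch; the table name and parameter value are hypothetical), the caller passes a value after the script name, and the called script reads it back through the positional substitution variable &1:
-- caller, e.g. runAll.sql:
@A1.sql HR
-- inside A1.sql, &1 holds the first parameter ('HR'):
SELECT * FROM employees WHERE department = '&1';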
This is not an absolute or relative path issue. It's a SQL interpreter issue: by default, it looks for files that have the .sql extension.
Please make sure to rename the file to file_name.sql.
Ex: if the workspace has a file named "A", rename it from A to "A.sql".