When writing a script that loads data, it's a waste of time to wait for it to load each time.
How can I check whether the variable is already defined?
You can use the exist function in Octave to do the work. It can be used to check the existence of a given name as a variable, built-in function, file, or directory. In your case, to check the existence of a variable, you may use something like this:
if (exist("your_var_name", "var") == 1)
printf("varname exists");
else
printf("varname not exists");
endif
You may refer to the following links for detailed information:
Built-in Function: exist (name, type)
Status of Variables
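Applied to the question, you can use this to skip an expensive load when the variable is already in the workspace. A minimal sketch (the variable and file names are placeholders):
% Only load the data if it is not already defined
if (exist("mydata", "var") != 1)
  mydata = load("mydata.mat");
endif
disp("mydata is available");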
You need to put the variable name in quotes too:
exist("varname", "var")
if (exist("itemcount") == 1)
% here it checks if itemcount is a variable, by changing the value after ==, you can check for function name, file name, dir, path etc.
end
Note itemcount is in double quotes.
By changing the value after ==, you can check for function name, file name, dir, path etc.
More info at:
https://www.gnu.org/software/octave/doc/interpreter/Status-of-Variables.html#XREFexist
Other return values:
2 if the name is an absolute file name, an ordinary file in Octave’s path, or (after appending ‘.m’) a function file in Octave’s path
3 if the name is a ‘.oct’ or ‘.mex’ file in Octave’s path
5 if the name is a built-in function
7 if the name is a directory
103 if the name is a function not associated with a file (entered on the command line)
Otherwise, 0 is returned.
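For instance (the file name here is just an illustration):
exist("printf")               % returns 5, because printf is a built-in function
exist("results.mat", "file")  % returns 2 if such an ordinary file exists in the path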
I just started learning Python and now I'm trying to integrate that with my GIS knowledge. As the title suggests, I'm attempting to convert an Excel sheet to a table, but I keep getting errors: one is wholly undecipherable to me, and the other seems to suggest that my file does not exist, which I know is incorrect since I copied its location directly from its properties.
Here is a screenshot of my environment. Please help if you can and thanks in advance.
[Screenshot: Environment/Error]
Simply put: you put the workspace directory inside the filename variable, so when arcpy handles it, it tries to access a file that does not exist, in an unknown workspace.
Try this (note that a Python string literal cannot end in a single backslash, so use a raw string without the trailing separator):
arcpy.env.workspace = r"J:\egis_work\dpcd\projects\SHARITA\Python"
arcpy.ExcelToTable_conversion("Exceltest.xlsx", "Bookstorestable", "Sheet1")
Arcpy uses the following syntax to convert geodatabase tables to Excel.
It is straightforward.
Example
Excel tables cannot be stored in a geodatabase. The most reasonable thing is to store them in the root folder that holds the geodatabase containing the table. Say I want to convert the table below to Excel and save it in that root folder, i.e. the folder in which the geodatabase sits.
I will go as follows; I have put the explanations after the #.
import arcpy
import os
from datetime import datetime, date, time
# Set environment settings
in_table= r"C:\working\Sunderwood\Network Analyst\MarchDistances\Centroid.gdb\SunderwoodFirstArcpyTable"
#os.path.basename(in_table)
out_xls= os.path.basename(in_table)+ datetime.now().strftime('%Y%m%d') # Here
#os.path.basename(in_table)- Gives the base name of pathname. In this case, it returns the name table
# + is used in python to concatenate
# datetime.now()- gives todays date
# Converts todays date into a string in the format YYYMMDD
# Please add all the above statements and you notice you have a new file name which is the table you input plus todays date
#os.path.dirname() method in Python is used to get the directory name from the specified path
geodatabase = os.path.dirname(in_table)
# In this case, os.path.dirname(in_table) gives us the geodatabase
# The join() method takes all items in an iterable and joins them into one string
SaveInFolder = "\\".join(geodatabase.split('\\')[:-1])
# Here I tell Python to join the pieces of the geodatabase path back together with "\", after dropping the last piece. I explain the split below.
# I use the split() method. The split() method splits a string into a list
# In the case above it splits into ['W:', 'working', 'Sunderwood', 'Network Analyst', 'MarchDistances', 'Centroid.gdb']. However, that is not quite what I want: the [:-1] drops 'Centroid.gdb' so that I am left with the pieces of the folder path W:\working\Sunderwood\Network Analyst\MarchDistances
# Before I tell arcpy to save, I have to specify the workspace in which it will save. So I now make my environment the SaveInFolder
arcpy.env.workspace = SaveInFolder
## Now I have to tell arcpy what I will call my new table. I use os.path.join. This method concatenates path components, with exactly one directory separator (os.sep) following each non-empty part except the last path component
newtable = os.path.join(arcpy.env.workspace, out_xls)
#In the above case it will give me "W:\working\Sunderwood\Network Analyst\MarchDistances\SunderwoodFirstArcpyTable20200402"
# You notice the newtable does not have an excel extension. I resort to + to concatenate .xls onto my path and make it "W:\working\Sunderwood\Network Analyst\MarchDistances\SunderwoodFirstArcpyTable20200402.xls"
table = newtable + ".xls"
#Finally, I call the arcpy method and feed it with the required variables
# Execute TableToExcel
arcpy.TableToExcel_conversion(in_table, table)
print(table + " is now available")
I've been developing an SSIS package from a flat file source. The file comes daily and the file name has datetime indication like this:
Filename_20190509042908.txt
I was wondering how I can pass only the part up to the date; I want the package to read the file dynamically, but without the last 6 digits, since those time digits are not consistent.
I want to pass Filename_20190509.txt
I have figured out how to take the filename up to the date, removing the time part. However, I'm having trouble getting the package to read the file name dynamically while ignoring the last 6 digits before the file extension.
Can anyone help me with this please?
Remove the time part from the full file path
Assuming that the full file path is stored within a variable named @[User::FilePath]
You have to add a variable of type string (example: @[User::Filename]). Before the Data Flow Task, add an Expression Task and use the following expression:
@[User::Filename] = SUBSTRING(@[User::FilePath], 1, LEN(@[User::FilePath]) -
FINDSTRING(REVERSE(@[User::FilePath]), "\\", 1)) + "\\" +
LEFT(TOKEN(@[User::FilePath],"\\",TOKENCOUNT(@[User::FilePath],"\\")),
LEN(TOKEN(@[User::FilePath],"\\",TOKENCOUNT(@[User::FilePath],"\\"))) - 10) + ".txt"
Example:
If the value of @[User::FilePath] is
C:\New Folder\1\Filename_20190503001221.txt
Then @[User::Filename] will be:
C:\New Folder\1\Filename_20190503.txt
If you have only the file name, such as
filename_20190503001221.txt
and the folder path is stored in another variable, just use the following expression:
@[User::Filename] = @[User::Folderpath] + "\\" +
LEFT(TOKEN(@[User::FilePath],"\\",TOKENCOUNT(@[User::FilePath],"\\")),
LEN(TOKEN(@[User::FilePath],"\\",TOKENCOUNT(@[User::FilePath],"\\"))) - 10) + ".txt"
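For example, assuming @[User::FilePath] here holds just the file name filename_20190503001221.txt and @[User::Folderpath] is C:\New Folder\1, then @[User::Filename] will again be C:\New Folder\1\filename_20190503.txt.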
Read File Source from Variable
Click on the Flat File connection manager used to read the source file, press F4 to show the Properties pane, click on the Expressions property, and assign the following expression to the ConnectionString property:
@[User::Filename]
Now change the Data Flow Task's DelayValidation property value to True.
Dynamic Flat File Connections in SQL Server Integration Services
I have to assume you are using a Foreach Loop already, as the filename is changing, but here is how to change the fully qualified name to what you want:
TOKEN(character_expression, delimiter_string, occurrence)
Your usage:
This will get you the full filename (the last token in the path; the number of backslashes plus one gives the token count):
exp = TOKEN(@filename, "\\", LEN(@filename) - LEN(REPLACE(@filename, "\\", "")) + 1)
Then from there you need to use LEFT plus add ".txt":
LEFT(exp, LEN(exp) - 10) + ".txt"
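For example, with the sample file above, exp evaluates to Filename_20190509042908.txt (27 characters), so LEFT(exp, LEN(exp) - 10) returns Filename_20190509, and appending ".txt" gives the required Filename_20190509.txt.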
How can I find out (with a Windows batch command) if, for example, a variable starts with ABC?
I know that I can test a variable if I know its whole content (if "%variable%"=="abc"), but I want it to look only at the beginning.
I also need it to find out where the batch file is located, so if there is another command that reveals the file's location, please let me know.
Use the variable substring syntax:
IF "%variable:~0,3%"=="ABC" [...]
If you need the path to the batch file without the batch file name, you can use the variable:
%~dp0
Syntax for this is explained in the help for the for command, although this variable syntax extends beyond just the for command syntax.
To find the batch file location, use %0 (or %~f0 for the full path to the current batch file), or the %CD% variable, which gives the current working directory.
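Putting the two answers together, here is a minimal sketch (the variable name and value are only for illustration):
@echo off
set "variable=ABCDEF"
rem Compare the first three characters of the variable against ABC
if "%variable:~0,3%"=="ABC" echo Variable starts with ABC
rem %~dp0 expands to the drive and path of this batch file
echo This batch file is located in: %~dp0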
I thought for sure there would be an SO question on this, but I haven't been able to find one.
I have 2 SQL files, myFile1.sql and myFile2.sql. myFile1.sql calls myFile2.sql like so:
-- In myFile1.sql:
@scripts/myFile2
This works with no problem, but now I'd like to pass an argument to the file. I've tried doing the following, with no success (results in a File Not Found exception):
@scripts/myFile2 'ImAnArgument'
Does anyone know what the syntax would be to do this?
I'm guessing your problem is that scripts/myFile2.sql is a path relative to the script it is written in. If that is so, then with @, SQL*Plus follows that path from the directory where SQL*Plus was started (the current working directory). If this is the problem, then it's not the parameter that is the issue, but rather that SQL*Plus can't find the file. In this case, you should use @@, which resolves the path relative to the file it is invoked from.
The parameter should work just as you proposed (documentation). Parameters provided when invoking a file are placed into substitution variables (rather than bind variables) and can be referenced by using an ampersand followed by the argument number. In your example, 'ImAnArgument' would be &1.
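For illustration (the table and column names are placeholders), myFile2.sql could reference the argument as &1:
-- In myFile2.sql:
SELECT *
  FROM my_table
 WHERE my_column = '&1';
-- In myFile1.sql, invoking the script relative to myFile1.sql's own location:
@@scripts/myFile2 'ImAnArgument'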
After many attempts, I wasn't able to pass a parameter in (and I still don't understand why not). But here is what I did to get the same effect:
-- In myFile1.sql:
DEFINE my_arg = 'ImAnArgument'
@scripts/myFile2
Then
-- In myFile2.sql
-- Do stuff using the substitution variable my_arg (referenced with an ampersand), such as
SELECT '&my_arg' FROM my_table;
I ran into this problem when uploading a file with a super long name - my database field was only set to 50 characters. Since then, I have increased my database field length, but I'd like to have a way to check the length of the filename before uploading. Below is my code. The validation returns '85' as the character length. And it returns the same count for every different file I upload (none of which have a file name length of 85).
<cfscript>
missing_info = "<p>There was a slight problem with your submission. The following are required or invalid:</p><ul>";
// Check the length of the file name for our database field
if ( len(Form["ResumeFile1"]) gt 100 )
{
missing_info = missing_info & "<li>'Resume File 1' is invalid. Character length must be less than 100. Current count is " & len(Form["ResumeFile1"]) & ".</li>";
validation_error = true;
ResumeFileInvalidMarker = true;
}
</cfscript>
Anyone see anything wrong with this?
Thanks!
http://www.cfquickdocs.com/cf9/#cffile.upload
After you upload the file, the variable "clientFileName" will give you the name of the uploaded file, without a file extension.
The only way to read the filename before you upload it would be to use JavaScript to read and parse the value (file path) in the file field.
A quick clarification on the wording of your question: by the time your code executes, the file upload has already happened. The file resides in a temporary directory on the ColdFusion server, and the form field related to the file upload contains the temporary filename for that file. Aside from checking to see if a file has been specified, do not do anything directly with that file or you'll be circumventing some built-in security.
You want to use the cffile tag with the upload action (or equivalent udf) to move the temp file into a folder of your choosing. At that point you get access to a structure containing lots of information. Usually I "upload" into a temporary directory for the application, which should be outside of the webroot for security.
At this point you'll then want to do any validation against the file, such as filename length, file type, file size, etc., and delete the file if it fails any checks. If it passes all checks, then you move it into its final destination, which may be inside the webroot.
In your case you'll want to check the cffile structure element clientFile which is the original filename including extension (which you'll need to check, since an extension doesn't need to be present and can be any length).
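As a rough illustration of that flow (the result variable name and temp destination below are assumptions; the form field name is taken from the question's code):
<cffile action="upload"
        filefield="ResumeFile1"
        destination="#getTempDirectory()#"
        nameconflict="makeunique"
        result="uploadResult">
<!--- clientFile holds the original file name, including the extension --->
<cfif len(uploadResult.clientFile) gt 100>
    <!--- Reject the file: remove it from the temp location and flag the error --->
    <cfset fileDelete(uploadResult.serverDirectory & "/" & uploadResult.serverFile)>
    <cfset validation_error = true>
    <cfset ResumeFileInvalidMarker = true>
</cfif>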