I am trying to validate an update or select query in shell scripting.
For example, my query is:
update table_name set col_name = 1 where emp_id = '1234'
The code should validate that the first word is update, the second word is table_name, and the third word is set. I tried to validate this, but I am not able to get it working.
I created a very simple Python script which works in both v2 and v3.
Copy the content to filename.py and run sudo chmod +x filename.py to make it executable.
#!/usr/bin/env python
import sys

string = "update table_name set col_name = 1 where emp_id = '1234'"
x = string.split(' ')

if "update" != x[0]:
    print("the first word does not contain update, exiting")
    sys.exit(1)
else:
    print("the first word contains update")

if "table_name" != x[1]:
    print("the second word does not contain table_name, exiting")
    sys.exit(1)
else:
    print("the second word contains table_name")

if "set" != x[2]:
    print("the third word does not contain set, exiting")
    sys.exit(1)
else:
    print("the third word contains set")

print("we're good to go")
I have a polygon file in the form of a fishnet, and another feature class with polygons named Trawl_Buffers. There is a unique field within Trawl_Buffers based on YEAR. I'd like to create a script to run a selection on YEAR and then perform a union analysis with the fishnet polygon for each YEAR, so the desired output would be "Trawl_Buffers_union2003", "Trawl_Buffers_union2004", etc. I have a function that gets me the unique list of years and puts them in a list which I called vals.
It then seems I need to run a for loop over this list of unique years, create a temporary selection, and then use that as input for the union, but I am having trouble implementing the query process.
Here is where I started, but I'm seriously tripping:
import arcpy
#Set the data environment
arcpy.env.overwriteOutput = True
arcpy.env.workspace = r'C:\Data\working\AK_Fishing_VMS\2021_Delivery\ArcPro_proj\ArcPro_proj.gdb'
trawlBuffs = r'C:\Data\working\AK_Fishing_VMS\2021_Delivery\ArcPro_proj\ArcPro_proj.gdb\buffers\buffers_testing'
fishnet = r'C:\Data\working\AK_Fishing_VMS\2021_Delivery\ArcPro_proj\ArcPro_proj.gdb\fishnets\vms_net1k'
unionOut = r'C:\Data\working\AK_Fishing_VMS\2021_Delivery\ArcPro_proj\ArcPro_proj.gdb\unions\union'
# function to get unique values for the YEAR field found within the trawlBuffs fc
def unique_values(table, field):
    with arcpy.da.SearchCursor(table, [field]) as cursor:
        return sorted({row[0] for row in cursor})
# Get the unique values for the field 'YEAR' found within the 'trawl_buffs' featureclass table
vals = unique_values(trawlBuffs, "YEAR")
# Create a query string for the selected country
yearSelectionClause = '"YEAR" = ' + "'" + vals + "'"
#loop through the years, create selection, union, make permanent
for year in vals:
    year_layer = str(year) + "_union"
    arcpy.MakeFeatureLayer_management(trawlBuffs, year_layer)
    arcpy.SelectLayerByAttribute_management(year_layer, "NEW_SELECTION", "\"YEAR\" = %d" % (year))
    arcpy.Union_analysis(fishnet, year_layer, unionOut)
    arcpy.CopyFeatures_management(year_layer, "union_" + str(year))
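For what it's worth, a minimal sketch of how the per-year loop could be wired up, reusing vals, trawlBuffs, fishnet and unionOut from above; the where clause on MakeFeatureLayer, the output naming, and the Delete call are assumptions (and YEAR is assumed to be numeric), so treat it as a starting point rather than tested code:

import arcpy

for year in vals:
    year_layer = "buffers_" + str(year)                     # temporary layer name
    where = '"YEAR" = %d' % year                            # adjust quoting if YEAR is a text field
    arcpy.MakeFeatureLayer_management(trawlBuffs, year_layer, where)
    out_fc = "Trawl_Buffers_union" + str(year)              # e.g. Trawl_Buffers_union2003
    arcpy.Union_analysis([year_layer, fishnet], out_fc)     # union of the selection and the fishnet
    arcpy.Delete_management(year_layer)                     # clean up the temporary layer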
Whenever I use read.csv.sql I cannot select from the first column, and any output from the code places an unusual character (Ã..) at the beginning of the first column's name.
So suppose I create a df.csv file in Excel that looks something like this:
df = data.frame(
a = 1,
b = 2,
c = 3,
d = 4)
Then if I use sqldf to query the csv which is in my working directory I get the following error:
> read.csv.sql("df.csv", sql = "select * from file where a == 1")
Error in result_create(conn@ptr, statement) : no such column: a
If I query a different column than the first, I get a result, but with the unusual characters in the output as seen below:
df <- read.csv.sql("df.csv", sql = "select * from file where b == 2")
View(df)
Any idea how to prevent these characters from being added to the first column name?
The problem is presumably that you have a file larger than R can handle, so you only want to read a subset of rows into R, but the condition you would filter on refers to the first column, whose name is messed up, so you can't use it.
Here are two alternative approaches. The first involves a bit more code but has the advantage that it is 100% R. The second is only one statement and also uses R, but additionally makes use of an external utility.
1) skip header Read the file in, skipping over the header. That will cause the columns to be labelled V1, V2, etc.; use V1 in the condition.
# write out a test file - BOD is a data frame that comes with R
write.csv(BOD, "BOD.csv", row.names = FALSE, quote = FALSE)
# read file skipping over header
DF <- read.csv.sql("BOD.csv", "select * from file where V1 < 3",
skip = 1, header = FALSE)
# read in header, assign it to DF and fix first column
hdr <- read.csv.sql("BOD.csv", "select * from file limit 0")
names(DF) <- names(hdr)
names(DF)[1] <- "TIME" # suppose we want TIME instead of Time
DF
## TIME demand
## 1 1 8.3
## 2 2 10.3
2) filter Another way to proceed is to use the filter= argument. Here we assume we know that the end of the column name is ime but there are other characters prior to that that we don't know. This assumes that sed is available and on your path. If you are on Windows install Rtools to get sed. The quoting might need to be changed depending on your shell.
When trying this on Windows I noticed that sed from Rtools changed the line endings so below we specified eol= to ensure correct processing. You may not need that.
DF <- read.csv.sql("BOD.csv", "select * from file where TIME < 3",
filter = 'sed -e "1s/.*ime,/TIME,/"' , eol = "\n")
DF
## TIME demand
## 1 1 8.3
## 2 2 10.3
So I figured it out by reading through the above comments.
I'm on a Windows 10 machine using Excel for Office 365. The special characters go away when I change how I save the file from "CSV UTF-8 (Comma Delimited)" to plain "CSV (Comma delimited)".
I'm using RJDBC in RStudio to pull a set of data from an Oracle database into R.
After loading the RJDBC package I have the following lines:
drv = JDBC("oracle.jdbc.OracleDriver", classPath="C:/R/ojdbc7.jar", identifier.quote = " ")
conn = dbConnect(drv,"jdbc:oracle:thin:@private_server_info", "804301", "password")
rs = dbSendQuery(conn, statement= paste("LONG SQL QUERY TO SELECT REQUIRED DATA INCLUDING REQUEST FOR VARIABLE x"))
masterdata = fetch(rs, n = -1) # extract all rows
Run as a normal script, these lines always execute without fail; they can take a few minutes depending on variable x, e.g. they may result in 100K or 1M rows being pulled. masterdata ends up holding everything in a data frame.
I'm now trying to place all of the above into a function, with one required argument, variable x which is a TEXT argument (a city name); this input however is also part of the LONG SQL QUERY.
The function I wrote called Data_Grab is as follows:
Data_Grab = function(x) {
  drv = JDBC("oracle.jdbc.OracleDriver", classPath="C:/R/ojdbc7.jar", identifier.quote = " ")
  conn = dbConnect(drv, "jdbc:oracle:thin:@private_server_info", "804301", "password")
  rs = dbSendQuery(conn, statement = paste("LONG SQL QUERY TO SELECT REQUIRED DATA,
                                            INCLUDING REQUEST FOR VARIABLE x"))
  masterdata = fetch(rs, n = -1) # extract all rows
  return(masterdata)
}
My function appears to execute in seconds (no error is produced), however I get just the 21 column headings for the data frame and the line
<0 rows> (or 0-length row.names)
Not sure what is wrong here; I obviously expect the function to still take minutes to execute since the data being pulled is large, but no actual data frame is being returned.
Help is appreciated!
If you want to parameterize your query to a JDBC database, try also using the gsubfn package. The code might look like this:
library(gsubfn)
library(RJDBC)
Data_Grab = function(x) {
  rd1 = x
  df <- fn$dbGetQuery(conn, "SELECT BLAH1, BLAH2
                             FROM TABLENAME
                             WHERE BLAH1 = '$rd1'")
  return(df)
}
Basically, you need to put a $ before the variable name that stores the parameter you wish to pass.
I've made a reverse function; it reverses the sentence, however it generates an index error.
What the program does is append the last word from s to rev[],
then delete the word s[-1].
s = "This is awesome"
def Reverse1(s):
s = s.split(" ") #reverses the word instead of letters
rev = []
while True:
rev.append (s[-1])
del s[-1]
print (rev)
return
reverse1(s)
It's returning an index error as it tries to continue when s is empty,
so I think it's the while loop statement.
Any ideas?
You need to stop the while loop; use a condition that ends once s is empty, something like this:
while len(s) > 0:
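Put together, a minimal runnable sketch of the fixed function; the function name, the terminating condition, and the final join are illustrative choices, not taken from the original post:

def reverse_words(sentence):
    words = sentence.split(" ")   # split into words so whole words are reversed, not letters
    rev = []
    while len(words) > 0:         # stop once every word has been moved
        rev.append(words[-1])
        del words[-1]
    return " ".join(rev)

print(reverse_words("This is awesome"))  # -> "awesome is This"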
I'm trying to make some changes to my dictionary counter in Python, but I'm not making any progress so far. I want my code to show the number of different words.
This is what I have so far:
# import sys module in order to access command line arguments later
import sys
# create an empty dictionary
dicWordCount = {}
# read all words from the file and put them into
#'dicWordCount' one by one,
# then count the occurrence of each word
You can use the Counter class from the collections module:
from collections import Counter
q = Counter(fileSource.read().split())   # word -> occurrence count
total = sum(q.values())                  # total number of words
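Since the question asks for the number of different words, note that the length of the Counter gives exactly that; a small self-contained illustration (the sample text here is made up):

from collections import Counter

text = "the cat sat on the mat the end"   # hypothetical sample input
counts = Counter(text.split())

print(len(counts))            # number of different words -> 6
print(sum(counts.values()))   # total number of words -> 8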
First, for your first problem, add a variable for the word count and one for the different words, so wordCount = 0 and differentWords = 0. In the loop where you read the file, put wordCount += 1 at the top, and in your first if statement put differentWords += 1. You can print these variables out at the end of the program as well.
For the second problem, in your printing loop add the if statement if len(strKey) > 4:.
If you want a full example code here it is.
import sys

fileSource = open(sys.argv[1], "rt")
dicWordCount = {}
wordCount = 0
differentWords = 0

for strWord in fileSource.read().split():
    wordCount += 1
    if strWord not in dicWordCount:
        dicWordCount[strWord] = 1
        differentWords += 1
    else:
        dicWordCount[strWord] += 1

for strKey in sorted(dicWordCount, key=dicWordCount.get, reverse=True):
    if len(strKey) > 4: # if the word's length is greater than four
        print(strKey, dicWordCount[strKey])

print("Total words: %s\nDifferent Words: %s" % (wordCount, differentWords))
For your first question, you can use a set to help you count the number of different words. (Assume there is a space between every two words.)
sentence = 'apple boy cat dog elephant fox'
different_word_count = len(set(sentence.split(' ')))
For your second question, using a dictionary to record the word count works fine.
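For example, a minimal sketch of dictionary-based counting with dict.get (the sample words are made up; a full-program version appears in the answer above):

word_count = {}
for word in 'apple boy cat apple dog boy apple'.split():
    word_count[word] = word_count.get(word, 0) + 1

print(word_count)        # {'apple': 3, 'boy': 2, 'cat': 1, 'dog': 1}
print(len(word_count))   # number of different words -> 4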
How about this?
#gives unique words count
unique_words = len(dicWordCount)
total_words = 0
for k, v in dicWordCount.items():
    total_words += v
#gives total word count
print(total_words)
You don't need a separate variable for counting words since you're using a dictionary; to count the total words, you just need to add up the values of the keys (which are the counts).
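In code, reusing dicWordCount from the snippet above, that comes down to a two-line equivalent of the loop:

unique_words = len(dicWordCount)           # number of different words
total_words = sum(dicWordCount.values())   # total word count
print(unique_words, total_words)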