Python for ArcGIS: add a text field to a shapefile, then search one field and, if it meets a condition, update another field

I'm learning Python (2.7) for ArcGIS (10.8) on my own. I have this problem:
Write a script that adds a text field to the roads.shp feature class called
FERRY and populates this field with YES and NO values, depending on
the value of the FEATURE field.
After trying to figure it out and searching the internet, I came up with this script:
import arcpy
from arcpy import env
env.workspace = "C:/esriPress/Python/Data/Exercise07/Results"  # Set workspace
env.overwriteOutput = True  # Allow overwriting outputs
fc = "roads.shp"      # Input feature class
newfield = "FERRY"    # New field to be created
fieldtype = "TEXT"    # Type of the field that is going to be created
fieldname = arcpy.ValidateFieldName(newfield)  # Determine whether the specific name is valid or not
field_length = 3      # Length of the new field to be created
fieldlist = arcpy.ListFields(fc)  # List the fields in 'fc'; returns a list of Field objects, not just the names
fieldnames = []       # Create an empty list

# Check whether 'newfield' already exists or not; if not, create it
for field in fieldlist:            # For each element in 'fieldlist'...
    fieldnames.append(field.name)  # ...append the field name to the list 'fieldnames'

if fieldname not in fieldnames:    # If 'fieldname' does not exist in 'fieldnames'...
    arcpy.AddField_management(fc, fieldname, fieldtype, field_length=field_length)  # ...create 'fieldname' in 'fc' with length = 3
    print ("New field has been added")
else:
    print ("Field name already exists.")

# Search in one field and update the other field
fields = ['FEATURE', newfield]  # A list containing the field we apply the condition to and the field we are going to update
with arcpy.da.UpdateCursor(fc, fields) as cursor:  # Set up the cursor; notice the cursor is made up of two fields
    for row in cursor:                  # For each row in the cursor...
        if row[0] == "Ferry Crossing":  # ...if the row in field 1 equals 'Ferry Crossing'...
            row[1] = "YES"              # ...set that row in field 2 to 'YES'
        else:                           # Otherwise...
            row[1] = "NO"               # ...set that row in field 2 to 'NO'
        cursor.updateRow(row)           # Update the row
        print ("New field has been added" % newfield)
        else:
            row[1] = "NO"
            cursor.updateRow(row)
            print ("Field name already exists" % newfield)
However, when I run the script in PythonWin 2.7.16, I get this error message:
Failed to run script - syntax error - invalid syntax
What is the problem with my script? Is it the right approach?

It appears you have 2 else blocks and an extra cursor.updateRow(row).
As a result, your print statement is not indented properly. Try running the following Update Cursor:
import arcpy

fc = '/path/to/your/geodatabase.gdb/feature_class'

# Add field
arcpy.AddField_management(fc, "FERRY", field_type="TEXT")

# Update attributes based on a condition
# Note row[0] is "FEATURE" and row[1] is "FERRY"
with arcpy.da.UpdateCursor(fc, ["FEATURE", "FERRY"]) as cursor:
    for row in cursor:
        if row[0] == "ADD YOUR CONDITION HERE":
            row[1] = "YES"
        else:
            row[1] = "NO"
        cursor.updateRow(row)
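If you also want to keep the field-existence check from your original script, a minimal sketch (reusing roads.shp and the field length of 3 from the question; not tested against your data) could look like this:

import arcpy

fc = "roads.shp"
# Only add FERRY if it is not already present on the feature class.
existing_fields = [f.name for f in arcpy.ListFields(fc)]
if "FERRY" not in existing_fields:
    arcpy.AddField_management(fc, "FERRY", "TEXT", field_length=3)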

Related

Change number format in Excel using names of headers - openpyxl [duplicate]

I have an Excel (.xlsx) file that I'm trying to parse, row by row. I have a header (first row) that has a bunch of column titles like School, First Name, Last Name, Email, etc.
When I loop through each row, I want to be able to say something like:
row['School']
and get back the value of the cell in the current row and the column with 'School' as its title.
I've looked through the OpenPyXL docs but can't seem to find anything terribly helpful.
Any suggestions?
I'm not incredibly familiar with OpenPyXL, but as far as I can tell it doesn't have any kind of dict reader/iterator helper. However, it's fairly easy to iterate over the worksheet rows, as well as to create a dict from two lists of values.
def iter_worksheet(worksheet):
    # It's necessary to get a reference to the generator, as
    # `worksheet.rows` returns a new iterator on each access.
    rows = worksheet.rows
    # Get the header values as keys and move the iterator to the next item
    keys = [c.value for c in next(rows)]
    for row in rows:
        values = [c.value for c in row]
        yield dict(zip(keys, values))
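A minimal usage sketch, assuming a workbook file named example.xlsx and a 'School' column header (both hypothetical here):

from openpyxl import load_workbook

wb = load_workbook('example.xlsx')       # hypothetical file name
for row in iter_worksheet(wb.active):    # wb.active is the first worksheet
    print(row['School'])                 # look up a cell value by its column title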
Excel sheets are far more flexible than CSV files so it makes little sense to have something like DictReader.
Just create an auxiliary dictionary from the relevant column titles.
If you have columns like "School", "First Name", "Last Name", "EMail" you can create the dictionary like this.
# 'values' holds the header-cell values from the first row
values = [c.value for c in ws[1]]
keys = dict((value, idx) for (idx, value) in enumerate(values))
for row in ws.iter_rows(min_row=2):
    school = row[keys['School']].value
I wrote a DictReader based on openpyxl. Save the second listing to a file named excel.py and use it like csv.DictReader. See the usage example in the first listing.
with open('example01.xlsx', 'rb') as source_data:
    from excel import DictReader
    for row in DictReader(source_data, sheet_index=0):
        print(row)
excel.py:
__all__ = ['DictReader']

from openpyxl import load_workbook
from openpyxl.cell import Cell

# Change the default value for a Cell from None to '' the same way as in csv.DictReader
Cell.__init__.__defaults__ = (None, None, '', None)


class DictReader(object):
    def __init__(self, f, sheet_index,
                 fieldnames=None, restkey=None, restval=None):
        self._fieldnames = fieldnames  # list of keys for the dict
        self.restkey = restkey         # key to catch long rows
        self.restval = restval         # default value for short rows
        self.reader = load_workbook(f, data_only=True).worksheets[sheet_index].iter_rows(values_only=True)
        self.line_num = 0

    def __iter__(self):
        return self

    @property
    def fieldnames(self):
        if self._fieldnames is None:
            try:
                self._fieldnames = next(self.reader)
                self.line_num += 1
            except StopIteration:
                pass
        return self._fieldnames

    @fieldnames.setter
    def fieldnames(self, value):
        self._fieldnames = value

    def __next__(self):
        if self.line_num == 0:
            # Used only for its side effect.
            self.fieldnames
        row = next(self.reader)
        self.line_num += 1
        # Unlike the basic reader, we prefer not to return blanks,
        # because we will typically wind up with a dict full of None
        # values.
        while row == ():
            row = next(self.reader)
        d = dict(zip(self.fieldnames, row))
        lf = len(self.fieldnames)
        lr = len(row)
        if lf < lr:
            d[self.restkey] = row[lf:]
        elif lf > lr:
            for key in self.fieldnames[lr:]:
                d[key] = self.restval
        return d
The following seems to work for me.
header = True
headings = []
for row in ws.rows:
    if header:
        for cell in row:
            headings.append(cell.value)
        header = False
        continue
    rowData = dict(zip(headings, row))
    wantedValue = rowData['myHeading'].value
I was running into the same issue as described above. Therefore I created a simple extension called openpyxl-dictreader that can be installed through pip. It is very similar to the suggestion made by @viktor earlier in this thread.
The package is largely based on source code of Python's native csv.DictReader class. It allows you to select items based on column names using openpyxl. For example:
import openpyxl_dictreader

reader = openpyxl_dictreader.DictReader("names.xlsx", "Sheet1")
for row in reader:
    print(row["First Name"], row["Last Name"])
Putting this here for reference.

How to check & change the datatype of a column of a pandas DataFrame using a combobox in a Jupyter notebook?

I would like to convert the data type of a column, but using ipywidgets:
1. Combobox1 will display all columns of the data frame.
2. A disabled Text1 box will show the existing datatype of the column selected at step 1.
3. Combobox2 will give options like {'int', 'float', 'datetime', 'object'}.
4. Text2 box will show something like:
df['customerID'].dtypes = String
The output in Text2 must be stored in a variable for future programming.
5. A button to execute the data type conversion.
Code that I referred to (but have not been able to customize completely):
# Common imports for widgets
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import pandas as pd
from IPython.display import display, clear_output

df = pd.read_csv('TelcomCustomer-Churn.csv', index_col=0)
foo = df.columns

# Function to apply to the dropdown object
def bar(x):
    print(x, '\n')      # Whole object
    print(x.new, '\n')  # New value

# Creation of a dropdown widget
box1 = widgets.Dropdown(
    options=foo,      # Object to iterate over
    description='Column Name:',
    value=foo[1],     # Default value selection
    rows=len(foo),    # The number of rows to display when showing the box
    interactive=True,
)

# Text box
text = widgets.Text(
    value='last',
    placeholder='datatype',
    description='String:',
    disabled=True
)

def callback(wdgt):
    with output:
        clear_output()
        display(wdgt.value)

text.on_submit(callback)

# Dropdown of (label, value) pairs
box2 = widgets.Dropdown(
    options=[('Object', 1), ('float', 2), ('int', 3), ('datetime', 4)],
    value=1,
    description='datatypes:',
)

# Button to click
select_button = widgets.Button(
    description='Convert Datatype',
    # missing action to be performed
    disabled=False
)

# Event handlers
box1.observe(bar, names='value')
text.observe(bar, names='value')
box2.observe(bar, names='value')
select_button.on_click(get_user_selection)
ui = widgets.HBox([box1,text,box2,select_button])
# display the UI
display(ui)
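For reference, a minimal sketch of the missing click handler might look like the following. The widget names come from the snippet above, the dtype codes mirror the options in box2, and get_user_selection itself is an assumption rather than tested code:

# Sketch of the missing handler; assumes df, box1, box2 and text defined above.
dtype_map = {1: 'object', 2: 'float', 3: 'int', 4: 'datetime'}

def get_user_selection(button):
    col = box1.value                  # column chosen in the first dropdown
    target = dtype_map[box2.value]    # dtype label chosen in the second dropdown
    if target == 'datetime':
        df[col] = pd.to_datetime(df[col])
    else:
        df[col] = df[col].astype(target)
    text.value = str(df[col].dtype)   # show the resulting dtype in the text box

select_button.on_click(get_user_selection)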

How can I use a loop to apply a function to a list of csv files?

I'm trying to loop through all files in a directory and add "indicator" data to them. I had the code working where I could select one file and do this, but now I am trying to make it work on all files. The problem is that when I make the loop, it says:
ValueError: Invalid file path or buffer object type: <class 'list'>
The goal is for each iteration to read another file from the list, make changes, and save the file back to the folder with the changes.
Here is the complete code without imports. I copied one of the file paths from the list and put it in a comment at the bottom.
### open dialog to select file
#file_path = filedialog.askopenfilename()

### create list from dir
listdrs = os.listdir('c:/Users/17409/AppData/Local/Programs/Python/Python38/Indicators/Sentdex Tutorial/stock_dfs/')

### append full path to list
string = 'c:/Users/17409/AppData/Local/Programs/Python/Python38/Indicators/Sentdex Tutorial/stock_dfs/'
listdrs_path = [string + x for x in listdrs]
print(listdrs_path)

### start loop, for each "file" in listdrs run the 2 functions below and overwrite the saved csv.
for file in listdrs_path:
    file_path = listdrs_path
    data = pd.read_csv(file_path, index_col=0)

    ########################################
    #### function 1
    def get_price_hist(ticker):
        # Put stock price data in dataframe
        data = pd.read_csv(file_path)
        #listdr = os.listdir('Users\17409\AppData\Local\Programs\Python\Python38\Indicators\Sentdex Tutorial\stock_dfs')
        print(listdr)
        # Convert date to timestamp and make index
        data.index = data["Date"].apply(lambda x: pd.Timestamp(x))
        data.drop("Date", axis=1, inplace=True)
        return data

    df = data
    ##print(data)

    ###### Indicator data #####################
    def get_indicators(data):
        # Get MACD
        data["macd"], data["macd_signal"], data["macd_hist"] = talib.MACD(data['Close'])
        # Get MA10 and MA30
        data["ma10"] = talib.MA(data["Close"], timeperiod=10)
        data["ma30"] = talib.MA(data["Close"], timeperiod=30)
        # Get RSI
        data["rsi"] = talib.RSI(data["Close"])
        return data
    ##### end functions #######

    data2 = get_indicators(data)
    print(data2)
    data2.to_csv(file_path)

###################################################
# here is an example of what a path from the list looks like
#'c:/Users/17409/AppData/Local/Programs/Python/Python38/Indicators/Sentdex Tutorial/stock_dfs/A.csv'
The problem is in your loop: the filename is in the variable file, but you are reading from file_path, which you have assigned to the entire file list. That is why you are getting the ValueError. Try this:
### open dialog to select file
#file_path = filedialog.askopenfilename()

### create list from dir
listdrs = os.listdir('c:/Users/17409/AppData/Local/Programs/Python/Python38/Indicators/Sentdex Tutorial/stock_dfs/')

### append full path to list
string = 'c:/Users/17409/AppData/Local/Programs/Python/Python38/Indicators/Sentdex Tutorial/stock_dfs/'
listdrs_path = [string + x for x in listdrs]
print(listdrs_path)

### start loop, for each "file_path" in listdrs_path run the 2 functions below and overwrite the saved csv.
for file_path in listdrs_path:
    data = pd.read_csv(file_path, index_col=0)

    ########################################
    #### function 1
    def get_price_hist(ticker):
        # Put stock price data in dataframe
        data = pd.read_csv(file_path)
        #listdr = os.listdir('Users\17409\AppData\Local\Programs\Python\Python38\Indicators\Sentdex Tutorial\stock_dfs')
        print(listdr)
        # Convert date to timestamp and make index
        data.index = data["Date"].apply(lambda x: pd.Timestamp(x))
        data.drop("Date", axis=1, inplace=True)
        return data

    df = data
    ##print(data)

    ###### Indicator data #####################
    def get_indicators(data):
        # Get MACD
        data["macd"], data["macd_signal"], data["macd_hist"] = talib.MACD(data['Close'])
        # Get MA10 and MA30
        data["ma10"] = talib.MA(data["Close"], timeperiod=10)
        data["ma30"] = talib.MA(data["Close"], timeperiod=30)
        # Get RSI
        data["rsi"] = talib.RSI(data["Close"])
        return data
    ##### end functions #######

    data2 = get_indicators(data)
    print(data2)
    data2.to_csv(file_path)
Let me know if it helps.
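As a side note, a short sketch using glob (assuming the same stock_dfs folder as above) builds the full paths directly and filters to CSV files, so no manual string concatenation is needed:

import glob

# Hypothetical drop-in replacement for the os.listdir + concatenation step
stock_dir = 'c:/Users/17409/AppData/Local/Programs/Python/Python38/Indicators/Sentdex Tutorial/stock_dfs/'
listdrs_path = glob.glob(stock_dir + '*.csv')
print(listdrs_path)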

Break document sections into a list for export with Python

I am very new to Python, and I am trying to break some legal documents into sections for export into SQL. I need to do two things:
1. Define the section numbers by the table of contents, and
2. Break up the document given the defined section numbers.
The table of contents lists section numbers: 1.1, 1.2, 1.3, etc.
Then the document itself is broken up by those section numbers:
1.1 "...Text...",
1.2 "...Text...",
1.3 "...Text...", etc.
Similar to the chapters of a book, but delimited by ascending decimal numbers.
I have the document parsed using Tika, and I've been able to create a list of sections with some basic regex:
import tika
import re
from tika import parser
parsed = parser.from_file('test.pdf')
content = (parsed["content"])
headers = re.findall("[0-9]*[.][0-9]",content)
Now I need to do something like this:
splitsections = content.split() by headers
var_string = ', '.join('?' * len(splitsections))
query_string = 'INSERT INTO table VALUES (%s);' % var_string
cursor.execute(query_string, splitsections)
Sorry if all this is unclear. Still very new to this.
Any help you can provide would be most appreciated.
Everything is tested except the last part with the DB. The code can also be improved, but that is another task; the main task is done.
The list split_content contains all the pieces of text you wanted (i.e. the text between 2.1 and 2.2, then between 2.2 and 2.3, and so on), excluding the section numbers and names themselves (i.e. excluding '2.1 Continuation', '2.2 Name' and so on).
I replaced tika with PyPDF2, as tika does not provide the tools needed for this task (i.e. I did not find a way to request a specific page number and get its content).
import re
import PyPDF2


def get_pdf_content(pdf_path,
                    start_page_table_contents, end_page_table_contents,
                    first_parsing_page, last_phrase_to_stop):
    """
    :param pdf_path: Full path to the PDF file
    :param start_page_table_contents: The page where the "Contents table" starts
    :param end_page_table_contents: The page where the "Contents Table" ends
        (i.e. the number of the page where the Contents Table ENDs, not the next one)
    :param first_parsing_page: The 1st page where we need to start data grabbing
    :param last_phrase_to_stop: The phrase that tells the code where to stop grabbing.
        The phrase must match exactly what is written in the PDF.
        This phrase will be excluded from the grabbed data.
    :return:
    """
    # ======== GRAB TABLE OF CONTENTS ========
    start_page = start_page_table_contents
    end_page = end_page_table_contents
    table_of_contents_page_nums = range(start_page - 1, end_page)

    sections_of_articles = []  # ['2.1 Continuation', '2.2 Name', ... ]
    open_file = open(pdf_path, "rb")
    pdf = PyPDF2.PdfFileReader(open_file)

    for page_num in table_of_contents_page_nums:
        page_content = pdf.getPage(page_num).extractText()
        page_sections = re.findall("[\d]+[.][\d][™\s\w;,-]+", page_content)
        for section in page_sections:
            cleared_section = section.replace('\n', '').strip()
            sections_of_articles.append(cleared_section)

    # ======== GRAB ALL NECESSARY CONTENT (MERGE ALL PAGES) ========
    total_num_pages = pdf.getNumPages()
    parsing_pages = range(first_parsing_page - 1, total_num_pages)

    full_parsing_content = ''  # Merged pages
    for parsing_page in parsing_pages:
        page_content = pdf.getPage(parsing_page).extractText()
        cleared_page = page_content.replace('\n', '')

        # Remove the page num from the start of "page_content".
        # Covers the case with pages 65, 71 and others where the "page_content" starts
        # with, for example, "616.6 Liability to Partners. (a) It is understood that"
        # i.e. "61" is the page num and "6.6 Liability ..." is the section data
        already_cleared = False
        first_50_chars = cleared_page[:51]
        for section in sections_of_articles:
            if section in first_50_chars:
                indx = cleared_page.index(section)
                cleared_page = cleared_page[indx:]
                already_cleared = True
                break

        # Covers all other cases
        if not already_cleared:
            page_num_to_remove = re.match(r'^\d+', cleared_page)
            if page_num_to_remove:
                cleared_page = cleared_page[len(str(page_num_to_remove.group(0))):]

        full_parsing_content += cleared_page

    # ======== BREAK ALL CONTENT INTO PIECES ACCORDING TO TABLE CONTENTS ========
    split_content = []
    num_sections = len(sections_of_articles)
    for num_section in range(num_sections):
        start = sections_of_articles[num_section]
        # Get the last piece, i.e. "11.16 FATCA" (as there is no "end" section after "11.16 FATCA",
        # we can't use the logic "grab info between sections 11.1 and 11.2, 11.2 and 11.3, and so on")
        if num_section == num_sections - 1:
            end = last_phrase_to_stop
        else:
            end = sections_of_articles[num_section + 1]

        content = re.search('%s(.*)%s' % (start, end), full_parsing_content).group(1)
        cleared_piece = content.replace('™', "'").strip()
        if cleared_piece[0:3] == '. ':
            cleared_piece = cleared_piece[3:]

        # There are a few occurrences of "[Signature Page Follows]" used as "last_phrase_to_stop".
        # We need the text between "11.16 FATCA" and the 1st occurrence of "[Signature Page Follows]"
        try:
            indx = cleared_piece.index(end)
            cleared_piece = cleared_piece[:indx]
        except ValueError:
            pass

        split_content.append(cleared_piece)

    # ======== INSERT TO DB ========
    # Did not test this section
    for piece in split_content:
        var_string = ', '.join('?' * len(piece))
        query_string = 'INSERT INTO table VALUES (%s);' % var_string
        cursor.execute(query_string, piece)
How to use it (one possible way):
1) Save the code above in my_pdf_code.py
2) In the Python shell:

import path.to.my_pdf_code as the_code
the_code.get_pdf_content('/home/username/Apollo_Investment_Fund_VIII_LPA_S1.pdf', 2, 4, 24, '[Signature Page Follows]')
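If you do wire up the untested DB section, a minimal sqlite3 sketch is shown below. It assumes the function is changed to return split_content instead of executing the insert itself, and uses a hypothetical one-column sections table:

import sqlite3

# Hypothetical storage: one row per section body.
# Assumes get_pdf_content() was modified to end with "return split_content".
split_content = the_code.get_pdf_content('/home/username/Apollo_Investment_Fund_VIII_LPA_S1.pdf',
                                         2, 4, 24, '[Signature Page Follows]')
conn = sqlite3.connect('sections.db')
conn.execute('CREATE TABLE IF NOT EXISTS sections (body TEXT)')
conn.executemany('INSERT INTO sections (body) VALUES (?)',
                 [(piece,) for piece in split_content])
conn.commit()
conn.close()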

rxDataStep using lagged values

In SAS it's possible to go through a dataset and use lagged values.
The way I would do it is to use a function that does a "lag", but this would presumably produce a wrong value at the beginning of a chunk. For example, if a chunk starts at row 200,000, it will assume NA for a lagged value that should instead come from row 199,999.
Is there a solution for this?
Here's another approach for lagging: self-merging using a shifted date. This is dramatically simpler to code and can lag several variables at once. The downsides are that it takes 2-3 times longer to run than my answer using transformFunc, and requires a second copy of the dataset.
# Get a sample dataset
sourcePath <- file.path(rxGetOption("sampleDataDir"), "DJIAdaily.xdf")

# Set up paths for two copies of it
xdfPath <- tempfile(fileext = ".xdf")
xdfPathShifted <- tempfile(fileext = ".xdf")

# Convert "Date" to be Date-classed
rxDataStep(inData = sourcePath,
           outFile = xdfPath,
           transforms = list(Date = as.Date(Date)),
           overwrite = TRUE
)

# Then make the second copy, but shift all the dates up
# one (or however much you want to lag).
# Use varsToKeep to subset to just the date and
# the variables you want to lag.
rxDataStep(inData = xdfPath,
           outFile = xdfPathShifted,
           varsToKeep = c("Date", "Open", "Close"),
           transforms = list(Date = as.Date(Date) + 1),
           overwrite = TRUE
)

# Create an output XDF (or just overwrite xdfPath)
xdfLagged2 <- tempfile(fileext = ".xdf")

# Use that incremented date to merge variables back on.
# duplicateVarExt will automatically tag variables from the
# second dataset as "Lagged".
# Note that there's no need to sort manually in this one -
# rxMerge does it automatically.
rxMerge(inData1 = xdfPath,
        inData2 = xdfPathShifted,
        outFile = xdfLagged2,
        matchVars = "Date",
        type = "left",
        duplicateVarExt = c("", "Lagged")
)
You're exactly right about the chunking problem. The workaround is to use rxGet and rxSet to pass values between chunks. Here's the function:
lagVar <- function(dataList) {
    # .rxStartRow returns the overall row number of the first row in this
    # chunk. So - the first row of the first chunk is equal to one.
    # If this is the very first row, there's no previous value to use - so
    # it's just an NA.
    if(.rxStartRow == 1) {
        # Put the NA out front, then shift all the other values down one row.
        # newName is the desired name of the lagged variable, set using
        # transformObjects - see below
        dataList[[newName]] <- c(NA, dataList[[varToLag]][-.rxNumRows])
    } else {
        # If this isn't the very first chunk, we have to fetch the previous
        # value from the previous chunk using .rxGet, then shift all other
        # values down one row, just as before.
        dataList[[newName]] <- c(.rxGet("lastValue"),
                                 dataList[[varToLag]][-.rxNumRows])
    }
    # Finally, once this chunk is done processing, set its lastValue so that
    # the next chunk can use it.
    .rxSet("lastValue", dataList[[varToLag]][.rxNumRows])
    # Return dataList with the new variable
    dataList
}
and how to use it in rxDataStep:
# Get a sample dataset
xdfPath <- file.path(rxGetOption("sampleDataDir"), "DJIAdaily.xdf")

# Set a path to a temporary file
xdfLagged <- tempfile(fileext = ".xdf")

# Sort the dataset chronologically - otherwise, the lagging will be random.
rxSort(inData = xdfPath,
       outFile = xdfLagged,
       sortByVars = "Date")

# Finally, put the lagging function to use:
rxDataStep(inData = xdfLagged,
           outFile = xdfLagged,
           transformObjects = list(
               varToLag = "Open",
               newName = "previousOpen"),
           transformFunc = lagVar,
           append = "cols",
           overwrite = TRUE)

# Check the results
rxDataStep(xdfLagged,
           varsToKeep = c("Date", "Open", "previousOpen"),
           numRows = 10)