I want to take data from one set and enter it into another empty set.
So, for example, I want to do something like:
if ([i,x] > 9){
new_data$House[y,x] <- data[i,2]
}
but I want to do it over and over, creating new rows in new_data.
How do I keep adding data to new_data and overriding/saving the new row?
Essentially, I just want to know how to "grow" an empty data set.
Please ignore any errors in the code, it is just an example and I am still working on other details.
Thanks
If you are using the R language, I presume you are looking for rbind:
new_data = NULL # define your new, empty dataset
for(i in 1:nrow(data)) # loop over the rows of data
{
  if(data[i,x] > 9) # if statement implementing the condition
  {
    new_data = rbind(new_data, data[i,2:6]) # append columns 2 to 6 of row i
  }
}
At the end, new_data will contain as many rows as satisfy the if statement, and each row will hold the values extracted from columns 2 to 6.
If that is what you are looking for, there are various ways to do it without a for loop, for example:
new_data = data[data[, x] > 9, 2:6]
If this answer is not satisfactory, please provide more details in your question, including a reproducible example of your data and the expected output.
In one of my projects, I need to check whether an input dataframe is empty or not. If it is not empty, I need to do a bunch of operations, load some results into a table, and overwrite the old data there.
On the other hand, if the input dataframe is empty, I do nothing and simply need to truncate the old data in the table. I know how to insert data with overwrite, but I don't know how to truncate the table only. I searched existing questions/answers and found no clear answer.
driver = 'com.microsoft.sqlserver.jdbc.SQLServerDriver'
stage_url = 'jdbc:sqlserver://server_name\DEV:51433;databaseName=project_stage;user=xxxxx;password=xxxxxxx'

if input_df.count() > 0:
    # Do something here to generate result_df
    print(" write to table ")
    write_dbtable = 'Project_Stage.StageBase.result_table'
    write_df = result_df
    write_df.write.format('jdbc').option('url', stage_url).option('dbtable', write_dbtable). \
        option('truncate', 'true').mode('overwrite').option('driver', driver).save()
else:
    print('no account to process!')
    query = """TRUNCATE TABLE Project_Stage.StageBase.result_table"""
    ### Not sure how to run the query
Truncating is probably easiest done like this:
write_df = write_df.limit(0)
Also, for better performance, instead of input_df.count() > 0 you should use:
Spark 3.2 and below: len(input_df.head(1)) > 0
Spark 3.3+: not input_df.isEmpty()
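Putting it together, a minimal sketch of the else branch (reusing stage_url, write_dbtable and driver from your snippet; write_df can be any dataframe whose columns match the target table):
# writing an empty dataframe with truncate=true and overwrite mode clears the
# target table without dropping it, so no explicit TRUNCATE statement is needed
empty_df = write_df.limit(0)
empty_df.write.format('jdbc') \
    .option('url', stage_url) \
    .option('dbtable', write_dbtable) \
    .option('truncate', 'true') \
    .option('driver', driver) \
    .mode('overwrite') \
    .save()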
I'm just trying to get the data from this table:
https://www.listcorp.com/asx/sectors/materials
and put all the values (the TEXT) into a list of lists.
I've tried so many different methods, using xpath, getByClassName, By.tag:
------------
rws = driver.find_elements_by_xpath("//table/tbody/tr/td")
---------------
table = driver.find_element_by_class_name("v-datatable v-table theme--light")
--------------
findElements(By.tagName("table"))
--------------
# to identify the table rows
l = driver.find_elements_by_xpath("//*[@class='v-datatable.v-table.theme--light']/tbody/tr")
# to get the row count with len
print(len(l))
# THIS RETURNS '1', which can't be right because there are hundreds of rows
And nothing seems to work to get the values in an easy-to-understand manner.
(EDIT: SOLVED)
Before applying the solution below, first call time.sleep(10); this allows the page to load so that the table can actually be retrieved. Then just append all the cells to a new list. YOU WILL NEED MULTIPLE LISTS, one per row.
So basically you can use find_elements_by_tag_name with code like this:
row = driver.find_elements_by_tag_name("tr")
data = driver.find_elements_by_tag_name("td")
print('Rows --> {}'.format(len(row)))
print('Data --> {}'.format(len(data)))
for value in row:
    print(value.text)
Add a proper wait so the data has populated before you locate the elements.
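For example, a minimal sketch of such a wait with WebDriverWait (the 20-second timeout and the tr locator are just illustrative choices):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait until the table rows are present before reading them
WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.TAG_NAME, "tr")))

rows = driver.find_elements_by_tag_name("tr")
table_data = [[cell.text for cell in r.find_elements_by_tag_name("td")] for r in rows]  # one list of cell texts per row
print(len(table_data))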
I'm trying to perform calculations based on the entries in a pandas dataframe. The dataframe contains 1466 rows, and I'll have to run similar calculations on other dfs with more rows later.
What I'm trying to do is calculate something like mag = (U-V)/(R-I) (but ignoring any values that are -999), put that in a new column, and then z_pred = 10**((mag - c)/m) in another new column (c and m are just hard-coded variables). I have other columns I need to add too, but I figure that'll just be an extension of the same method.
I started out by trying
for i in range(1):
    current = qso[:]
    mag = (U - V)/(R - I)
    name = current['NED']
    z_pred = 10**((mag - c)/m)
    z_meas = current['z']
but I got either a Series for z, which I couldn't operate on, or various type errors when I tried to print the values or write them to a file.
I found this question which gave me a start, but I can't see how to apply it to multiple calculations, as in my situation.
How can I achieve this?
Conditionally adding calculated columns row-wise is usually done with numpy's np.where:
df['mag'] = np.where(~df[['U', 'V', 'R', 'I']].eq(-999).any(axis=1), (df.U - df.V) / (df.R - df.I), -999)
Note: the assumption here is that when any of the columns contains -999, the value is not calculated and -999 is returned instead.
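The same pattern extends to the other columns; for example, the redshift column from your snippet could be added like this (a sketch, assuming c and m are already defined and that mag was set to -999 for the invalid rows as above):
df['z_pred'] = np.where(df['mag'] != -999, 10 ** ((df['mag'] - c) / m), -999)  # keep invalid rows flagged as -999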
I am reading a table whose values all have to be validated before we process it further. The valid values are stored in another table that we match our main table against. The validation criterion is to match several columns, as follows:
Table 1 (the main data we read in)
Name --- Unit --- Age --- Address --- Nationality
The above shows the column names that we are reading from the table; the other table contains the valid values of those columns. When we look for valid values in our main table, we have to consider a combination of columns in the main data table, for example Name --- Unit --- Age. If all the values in a particular row for that column combination match the other table, then we keep the row; otherwise we delete it.
How do I address this with NumPy?
Thanks
You can just loop through the rows. An easy/simple way would be:
dummy_df = table_df.copy() ## make a copy of your table, since we are deleting rows and want to keep the original df
relevant_columns = ['age','name','sex',...] ## define relevant columns, in case either dataframe has columns you don't want to compare on
for indx in dummy_df.index:
    ## checks whether the row is identical (on the relevant columns) to any row in main_df; if so, drops it
    if ((np.array(dummy_df.loc[indx][relevant_columns]) == main_df[relevant_columns].values).sum(1) == len(relevant_columns)).sum() > 0:
        dummy_df = dummy_df.drop(indx)
PS: I am assuming the data is in pandas dataframe format.
Hope it helps :)
PS2: if the headers/columns have different names, it won't work.
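If both dataframes do share the column names, the filter described in the question (keep only the rows whose combination appears in the valid-values table) can also be written without a loop; a sketch, with valid_df as a hypothetical dataframe of valid combinations:
relevant_columns = ['Name', 'Unit', 'Age']
# an inner merge keeps exactly the rows of table_df whose combination of the
# relevant columns also appears in valid_df; everything else is dropped
filtered_df = table_df.merge(valid_df[relevant_columns].drop_duplicates(), on=relevant_columns, how='inner')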
There is an inconsistency with dataframes that I can't explain. In the following, I'm not looking for a workaround (I already found one) but for an explanation of what is going on under the hood and how it explains the output.
One of my colleagues, whom I talked into using Python and pandas, has a dataframe "data" with 12,000 rows.
"data" has a column "length" that contains numbers from 0 to 20. She wants to divide the dataframe into groups by length range: 0 to 9 in group 1, 10 to 14 in group 2, 15 and above in group 3. Her solution was to add another column, "group", and fill it with the appropriate values. She wrote the following code:
data['group'] = np.nan
mask = data['length'] < 10;
data['group'][mask] = 1;
mask2 = (data['length'] > 9) & (data['phraseLength'] < 15);
data['group'][mask2] = 2;
mask3 = data['length'] > 14;
data['group'][mask3] = 3;
This code is not good, of course. The reason it is not good is that you don't know at run time whether data['group'][mask3], for example, will be a view (and thus actually change the dataframe) or a copy (and thus leave the dataframe unchanged). It took me quite some time to explain this to her, since she argued, correctly, that she is doing an assignment, not a selection, so the operation should always return a view.
But that was not the strange part. The part that even I couldn't understand is this:
After performing this set of operations, we verified that the assignment took place in two different ways:
By typing data in the console and examining the dataframe summary. It told us we had a few thousand null values. The number of null values was the same as the size of mask3, so we assumed the last assignment was made on a copy and not on a view.
By typing data.group.value_counts(). That returned 3 values: 1, 2 and 3 (surprise). We then typed data.group.value_counts().sum() and it summed up to 12,000!
So by method 2, the group column contained no null values and held all the values we wanted it to have. But by method 1 it didn't!
Can anyone explain this?
See the pandas docs on returning a view versus a copy.
You don't want to set values this way, for exactly the reason you pointed out: since you don't know whether it's a view, you don't know whether you are actually changing the data. pandas 0.13 will raise/warn when you attempt to do this, but it is easiest/best to just access it like:
data.loc[mask3,'group'] = 3
which guarantees an in-place setitem.
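For example, your colleague's snippet could be rewritten entirely with .loc (a sketch, using data['length'] throughout):
data['group'] = np.nan
data.loc[data['length'] < 10, 'group'] = 1                           # group 1: length 0-9
data.loc[(data['length'] > 9) & (data['length'] < 15), 'group'] = 2  # group 2: length 10-14
data.loc[data['length'] > 14, 'group'] = 3                           # group 3: length 15 and above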