Why does my text value not appear in my MATLAB GUI? - sql

I've managed to retrieve 5 string values from my database, so that results = 'something1' 'something2' 'something3' 'something4' 'something5'. Now I want these values to be displayed in the edit text boxes of my MATLAB GUI. How do I do that? How do I pass all the values from results = curs.Data; to the 5 different set(handles.edit1,'String',...) calls?
%Assign data to output variable
results = curs.Data;
display(results);
%Display in edit text boxes of the MATLAB GUI
set(handles.edit1,'String');
set(handles.edit2,'String');
set(handles.edit3,'String');
set(handles.edit4,'String');
set(handles.edit5,'String');

If results is a cell array then simply do:
set(handles.edit1,'String',results{1});
and repeat for each string. Or, if you wish, you can use arrayfun:
arrayfun(@(k) eval(['set(handles.edit' num2str(k) ',''String'',results{' num2str(k) '});']),1:5);

Related

Pandas replace column values

Hello,
I am analyzing the following dataset with this information.
The column ['program_number'] is an object but I want to change it to an integer column.
I have tried to replace some values but it doesn't work.
As you can see, some values like 6 are duplicated, e.g. '6 ' and 6.
How can I resolve it? Many thanks.
UPDATE
Didn't see 1X and 3X at first.
If you need those numbers and just want to remove the X then:
df["Program"] = df["Program"].str.strip(" X").astype(int)
If there is data in the column which isn't numeric or which shouldn't be converted, you can use pd.to_numeric with errors='coerce'. Cells that can't be converted become NaN. Be aware that this will result in floating-point numbers.
df["Program"] = pd.to_numeric(df["Program"], errors="coerce")
Old answer:
You want to use str.strip() here, rather than replace.
Try this:
df1['program_number'] = df1['program_number'].str.strip().astype(int)
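Putting the two approaches together, here is a minimal runnable sketch; the sample values are made up for illustration and are not the asker's actual data:
import pandas as pd

# Illustrative data mimicking the question: stray whitespace, duplicates, and an 'X' suffix.
df = pd.DataFrame({"Program": ["6 ", "6", "1X", "3X", "12"]})

# Strip whitespace and trailing 'X', then convert to integers.
df["Program"] = df["Program"].str.strip(" X").astype(int)
print(df["Program"].tolist())  # [6, 6, 1, 3, 12]

# Alternatively, coerce anything non-numeric to NaN (the column becomes float).
df2 = pd.DataFrame({"Program": ["6 ", "abc", "12"]})
df2["Program"] = pd.to_numeric(df2["Program"].str.strip(" X"), errors="coerce")
print(df2["Program"].tolist())  # [6.0, nan, 12.0]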

Python selenium get table values into List of Lists

I'm just trying to get the data from this table:
https://www.listcorp.com/asx/sectors/materials
and put all the values (the TEXT) into a list of lists.
I've tried so many different methods, using XPath, class name, and tag name:
------------
rws = driver.find_elements_by_xpath("//table/tbody/tr/td")
---------------
table = driver.find_element_by_class_name("v-datatable v-table theme--light")
--------------
findElements(By.tagName("table"))
--------------
# to identify the table rows
l = driver.find_elements_by_xpath("//*[@class='v-datatable.v-table.theme--light']/tbody/tr")
# to get the row count with the len method
print(len(l))
# THIS RETURNS '1', which can't be right because there are hundreds of rows
Nothing seems to work to get the values in an easy-to-understand manner.
(EDIT: SOLVED)
Before applying the solution below, first call time.sleep(10); this gives the page time to load so that the table can actually be retrieved. Then just append all the cells to a new list. You will need multiple lists to fit all the rows.
So basically you can use find_elements_by_tag_name
and use this code
row = driver.find_elements_by_tag_name("tr")
data = driver.find_elements_by_tag_name("td")
print('Rows --> {}'.format(len(row)))
print('Data --> {}'.format(len(data)))
for value in row:
    print(value.text)
Add a proper wait so the data has time to populate.
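To turn this into an actual list of lists, one possible sketch (assuming the pre-Selenium-4 find_elements_by_* API, Chrome, and the URL from the question; the 20-second timeout is an arbitrary choice) is to wait for the rows explicitly and then collect the cell text per row:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.listcorp.com/asx/sectors/materials")

# Wait until the table rows are present instead of sleeping blindly.
WebDriverWait(driver, 20).until(
    EC.presence_of_all_elements_located((By.XPATH, "//table/tbody/tr")))

# One inner list of cell texts per table row.
table_data = []
for row in driver.find_elements_by_xpath("//table/tbody/tr"):
    cells = row.find_elements_by_tag_name("td")
    table_data.append([cell.text for cell in cells])

print(len(table_data), "rows")
driver.quit()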

Pandas.dropna method can't delete Nan value rows(or columns)

I have some data that may contain null values.
I want to delete the null values (the whole row or the whole column).
How can I deal with this?
Here is my data:
https://reurl.cc/5lONv6
It has some null values in the time series data.
The following is my code:
c=pd.read_csv('./in/historical_01A190.txt',error_bad_lines=False)
c.dropna(axis=0,how='any',inplace=True)
c.dropna(axis=1,how='any',inplace=True)
c.to_csv('./out/historical_01A190.txt',index=False)
but it didn't work.
Can anyone help me?
Okay, first of all, your data isn't saved as a csv. It's saved as a tab-separated file.
So you need to open it using pd.read_table
>>> c=pd.read_table('./data.txt',error_bad_lines=False,sep='\t')
Second, your data is full of NaNs: if you use dropna on either rows or columns, you end up with just one row or one column (the dates) left. But with the correct reader for your file, dropna and to_csv work as expected.
If you don't use inplace=True, dropna returns a new DataFrame rather than modifying the original, so you need to assign the result back. Also note that to_csv returns None, so don't assign its result:
c = c.dropna(axis=0, how='any')
c = c.dropna(axis=1, how='any')
c.to_csv('./out/historical_01A190.txt', index=False)
Try this.
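As a small self-contained illustration (synthetic data, not the linked file) of why how='any' can wipe out almost everything, and how thresh= keeps rows with a minimum number of non-null values:
import numpy as np
import pandas as pd

c = pd.DataFrame({
    "date": ["2020-01-01", "2020-01-02", "2020-01-03"],
    "a": [1.0, np.nan, 3.0],
    "b": [np.nan, np.nan, 6.0],
})

print(c.dropna(axis=0, how="any"))   # only the fully populated row survives
print(c.dropna(axis=1, how="any"))   # only the 'date' column survives
print(c.dropna(axis=0, thresh=2))    # keep rows with at least 2 non-NaN values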

Count items in Infopath field

I created a form in InfoPath with a rich text box field. The field is used to keep a list of usernames (first and last). I want to be able to keep a count of each entry and keep a tally. I then want to use that total number of entries to add to or subtract from other fields. Is there any way to do that?
Is the rich text box field just a large string? If so, you could use Python's built-in split function and split by either "\r\n" or ",".
Example:
u = "Bob, Michael, Jean"
x = u.split(",")
x will be a list of usernames. If you are using line breaks for each new username, then replace (",") with ("\r\n").
Now, to count the items in the list, you just need to iterate over the list you created with a for loop.
Example:
b = 0
u = "Bob, Michael, Jean"
x = u.split(",")
for i in x:
    b += 1  # b will be the number of usernames
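For a plain count, Python's built-in len() gives the same result without the loop (the sample names here are illustrative):
u = "Bob, Michael, Jean"
names = [name.strip() for name in u.split(",")]
print(len(names))  # 3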

How to access columns by their names and not by their positions?

I have just tried my first sqlite select-statement and got a result (an iterator over tuples). So, in other words, every row is represented by a tuple and I can access value in the cells of the row like this: r[7] or r[3] (get value from the column 7 or column 3). But I would like to access columns not by their positions but by their names. Let us say, I would like to know the value in the column user_name. What is the way to do it?
I found the answer to my question here:
cursor.execute("PRAGMA table_info(tablename)")
print(cursor.fetchall())
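PRAGMA table_info gives you the column names and their positions. A related approach (a sketch using the standard sqlite3 module; the database file, table, and column names below are illustrative) is to set the connection's row_factory to sqlite3.Row, which lets you index result rows by column name directly:
import sqlite3

conn = sqlite3.connect("example.db")   # illustrative database file
conn.row_factory = sqlite3.Row         # rows can now be indexed by column name

cur = conn.cursor()
cur.execute("SELECT * FROM users")     # illustrative table name
for row in cur.fetchall():
    print(row["user_name"])            # access by name instead of position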