PyQt5 - Make integers and dates in QTableWidget properly sortable

I put the data into my QTableWidget tableView using a loop:
for i in range(0, len(res)):
    self.tableView.setItem(i, 2, QTableWidgetItem(str(create_dataframe(res)[2][i])))
    self.tableView.setItem(i, 3, QTableWidgetItem(str(create_dataframe(res)[3][i])))
where create_dataframe(res)[2][i] returns a value of class 'int' and create_dataframe(res)[3][i] returns a value of class 'datetime.datetime' (like '2017-03-25 16:51:24'). The question is: how do I make these items properly sortable through self.tableView.setSortingEnabled(True), i.e. not as strings, but as integers and datetimes respectively? I know that I should use setData and Qt.DisplayRole, but could you please give an example using this short piece of code?
Thank you.

OK, here is the answer I came up with myself:
it2 = QTableWidgetItem()
it2.setData(Qt.EditRole, QVariant(create_dataframe(res)[2][i]))
self.tableView.setItem(i, 2, it2)
self.tableView.setItem(i, 3, QTableWidgetItem(str(create_dataframe(res)[3][i])))
I.e. there is no need to transform the datetime value: in its 'YYYY-MM-DD HH:MM:SS' string form it already sorts chronologically. For the integer values, however, I have to create a QTableWidgetItem instance, call .setData with a QVariant holding the int, and only then pass the item to setItem; that way the column sorts numerically rather than lexicographically.
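For completeness, here is a minimal self-contained sketch of the same idea (the window, column layout and sample values below are invented for illustration; only the setData/QVariant technique comes from the answer above):
from PyQt5.QtCore import Qt, QVariant
from PyQt5.QtWidgets import QApplication, QTableWidget, QTableWidgetItem

app = QApplication([])
table = QTableWidget(3, 2)
rows = [(9, '2017-03-25 16:51:24'),
        (105, '2016-01-02 08:00:00'),
        (27, '2018-12-31 23:59:59')]
for row, (count, stamp) in enumerate(rows):
    # integer column: store the int itself so sorting is numeric (9 < 27 < 105)
    it = QTableWidgetItem()
    it.setData(Qt.EditRole, QVariant(count))
    table.setItem(row, 0, it)
    # datetime column: ISO-formatted strings already sort chronologically
    table.setItem(row, 1, QTableWidgetItem(stamp))
table.setSortingEnabled(True)
table.show()
app.exec_()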

Related

Finding smallest dtype to safely cast an array to

Let's say I want to find the smallest data type I can safely cast this array to, to save it as efficiently as possible. (The expected output is int8.)
arr = np.array([-101,125,6], dtype=np.int64)
The most logical solution seems something like
np.min_scalar_type(arr) # dtype('int64')
but that function doesn't work as expected for arrays. It just returns their original data type.
The next thing I tried is this:
np.promote_types(np.min_scalar_type(arr.min()), np.min_scalar_type(arr.max())) # dtype('int16')
but that still doesn't output the smallest possible data type.
What's a good way to achieve this?
Here's a working solution I wrote. It will only work for integers.
def smallest_dtype(arr):
    arr_min = arr.min()
    arr_max = arr.max()
    for dtype_str in ["u1", "i1", "u2", "i2", "u4", "i4", "u8", "i8"]:
        if (arr_min >= np.iinfo(np.dtype(dtype_str)).min) and (arr_max <= np.iinfo(np.dtype(dtype_str)).max):
            return np.dtype(dtype_str)
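For the example array from the question this returns the expected type (a quick check, assuming the function above and the arr defined earlier):
smallest_dtype(arr)  # dtype('int8')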
This is close to your initial idea:
np.result_type(np.min_scalar_type(arr.min()), arr.max())
It will keep the signed int8 derived from arr.min() as long as arr.max() also fits inside that type.
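A quick check with the array from the question (this relies on NumPy's value-based promotion for Python scalars, so treat it as a sketch rather than a guarantee across NumPy versions):
import numpy as np

arr = np.array([-101, 125, 6], dtype=np.int64)
np.result_type(np.min_scalar_type(arr.min()), arr.max())  # dtype('int8')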

Formatting column in pandas to decimal places using table.style based on value

I am trying to format a column in a dataframe using style.
So far I successfully used the styling for a fixed number of decimals:
mytable.style.format('{:,.2f}', pd.IndexSlice[:, ['Price']])
but I need to expand this to formatting based on value, like this:
if the value is >= 1000, format to zero decimal places
if the value is between 1 and 1000, format to two decimal places
if the value is < 1, format to five decimal places
Does anyone have a solution for this?
Thank you!
Building upon Code_beginner's answer – the callable should return the formatted string as output:
def my_format(val):
    if val >= 1000:
        return f"{val:,.0f}"
    if val >= 1:
        return f"{val:,.2f}"
    return f"{val:,.5f}"
mytable.style.format({'Price': my_format})
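A quick way to see the three branches in action (the DataFrame below is made up; only the 'Price' column name comes from the question):
import pandas as pd

mytable = pd.DataFrame({'Price': [1234.5678, 56.789, 0.123456]})
mytable.style.format({'Price': my_format})  # displays 1,235 / 56.79 / 0.12346 in a notebook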
What you are looking for is called "conditional formatting", in which you set the conditions as you described and return the right format. There are examples in the documentation; they use a lambda function there, but you can also create a normal function, which might look something like this:
def customfunc(val):
    if val >= 1000:
        format = '{:,.0f}'
    if val < 1000 and val >= 1:
        format = '{:,.2f}'
    if val < 1:
        format = '{:,.5f}'
    return format
df.style.format({0:customfunc})
This should style your first column as described in your problem. If the column has a name, you have to adjust it accordingly. If you have trouble, see the documentation linked above; there are more examples.
Just to have it visually clear, this is my line of code as it looks now:
df.style.format({'Price': customfunc})

Django - Query: how to decode integer field value with its string value 'on the fly'

Maybe it is not good practice, but I would like to know if it is possible to decode an integer field value with its string value.
I know I should have a specific model linked as a thesaurus...
I have a model mymodel.
I produce a QuerySet, for example mymodel.objects.values('id','var2','var3','var4').
var3 is an integer (0/1) for a Yes/No answer.
Is it possible to populate my QuerySet with Yes or No instead of the integer value?
You can annotate it, for example:
from django.db.models import Case, CharField, Value, When
mymodel.objects.values(
    'id', 'var2', 'var4',
    new_var3=Case(
        When(var3=1, then=Value('Yes')),
        default=Value('No'),
        output_field=CharField(),
    )
)
Note, however, that you have to rename the annotated field (hence new_var3 instead of var3).
That being said, it is not a good idea to use .values(..) or .values_list(..) to perform serialization. Furthermore, it makes the database do text processing, and that is not really the core task of a database. Normally serializers [drf-doc] are used for that.
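A minimal sketch of the serializer route mentioned above (the field names come from the question, but the serializer itself is an assumed illustration, not part of the original answer):
from rest_framework import serializers

class MyModelSerializer(serializers.ModelSerializer):
    # expose var3 as 'Yes'/'No' instead of 1/0
    var3 = serializers.SerializerMethodField()

    class Meta:
        model = mymodel  # the model from the question
        fields = ['id', 'var2', 'var3', 'var4']

    def get_var3(self, obj):
        return 'Yes' if obj.var3 == 1 else 'No'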

How can I change column data type from float to string in Julia?

I am trying to convert a column in a dataframe from float to string. I have tried
df = readtable("data.csv", coltypes = {String, String, String, String, String, Float64, Float64, String});
but I got the complaint
syntax: { } vector syntax is discontinued
I also have tried
dfB[:serial] = string(dfB[:serial])
but it didn't work either. So, I'd like to know what the proper approach to changing a column's data type in Julia would be.
Thanks.
On your first attempt, Julia tells you what the problem is - you can't make a vector with {}, you need to use []. Also, the name of the keyword argument should be eltypes rather than coltypes.
On the second try, you don't have a float, you have a Vector of floats. So to change the type you need to change the type of all elements. In Julia, elementwise operations on vectors are generalized by the 'dot' syntax, e.g. string.(collect(dfB[:serial])). The collect is currently needed to cast the DataArray to a normal Array first – this will fail if the DataArray contains NAs. IMHO the DataFrames interface is still rather wonky, so expect a few headaches like this ATM.

In Golang, can we use a string value as a variable name?

I have a slice with some variable names,
like
strList := ['abcd', 'efgh', 'ijkl']
and I want to turn those strings into variable names (so that I can iterate over some objects).
What I'm curious about is how I can use a string value as a variable name in code; something like strList[0] doesn't seem to be allowed....
Thanks for your help!
Since your strings will be read at runtime and your variable names will be checked at compile time, it's probably not possible to actually create a variable with a name based on a string.
However, you can make a map that stores values with string keys. For example, if you wanted to hold integer values inside something you can look up using the values "abcd", "efgh", etc., you would declare:
myMap := map[string]int{
    "abcd": 1,
    "efgh": 2,
    "ijkl": 3,
}
and you could then read those values with e.g. myMap["abcd"] // 1.
I think you want something like http://play.golang.org/p/M_wHwemWL6 ?
Note that the syntax for a slice literal uses {}'s, not []'s (e.g. []string{"abcd", "efgh", "ijkl"}).