More information on the library I am using can be found here: https://pvlib-python.readthedocs.io/en/stable/.
I am taking data from a function call in the pvlib library of the following form:
curve_info = pvsystem.singlediode(
    photocurrent=IL,
    saturation_current=I0,
    resistance_series=Rs,
    resistance_shunt=Rsh,
    nNsVth=nNsVth,
    ivcurve_pnts=100,
    method='lambertw')
The resulting curve_info object is of type <class 'collections.OrderedDict'>. I can access a particular value via curve_info['i_mp'], which returns an object of type <class 'numpy.ndarray'>. I now want to add this data to a dataframe I created called conditions. I first add the column, then place the data.
conditions['i_mp'] = 0
conditions.at[index,'i_mp'] = float(curve_info['i_mp'])
The value is inserted; however, it loses all decimal places (the first two columns are some other generic data associated with the row).
If I print(curve_info['i_mp']) I get decimals like 0.18177577443332454. I am not sure why the decimals are truncated here; I need them to be preserved.
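A minimal sketch of what is probably happening, assuming the column dtype is the culprit: conditions['i_mp'] = 0 creates an int64 column, so a float written into one of its cells can be cast down to an integer. The DataFrame below is a made-up stand-in for conditions; creating the column as float avoids the truncation.
import numpy as np
import pandas as pd

# Made-up stand-in for the real 'conditions' DataFrame.
conditions = pd.DataFrame({'id': [1, 2]})

conditions['i_mp'] = 0
print(conditions['i_mp'].dtype)   # int64 -- float cell assignments may be cast to int

# Creating the column as float (NaN) keeps the decimals.
conditions['i_mp'] = np.nan
conditions.at[0, 'i_mp'] = 0.18177577443332454
print(conditions['i_mp'])         # 0.181776..., NaN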
Thanks.
I am using this line of code, which uses the replace method, to combine low-frequency values in the column:
psdf['method_name'] = psdf['method_name'].replace(small_categoris, 'Other')
The error I am getting is:
'to_replace' should be one of str, list, tuple, dict, int, float
So I tried to run this line of code before the replace method
psdf['layer'] = psdf['layer'].astype("string")
Now the column is of type string, but the same error still appears. For context, I am working with the pandas API on Spark. Also, is there a more efficient way than replace, especially if we want to do the same for more than one column?
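A hedged sketch of a possible fix, assuming small_categoris is an Index or Series of the low-frequency labels (e.g. taken from a value_counts() filter, which is not shown above) rather than one of the accepted to_replace types; MIN_COUNT and the column list below are made up for illustration.
# Assumption: small_categoris is an Index/Series of low-frequency labels;
# replace() on pandas-on-Spark only accepts str, list, tuple, dict, int, float.
small_list = small_categoris.to_list()
psdf['method_name'] = psdf['method_name'].replace(small_list, 'Other')

# A mask-based alternative that avoids replace() and extends to several columns.
MIN_COUNT = 100   # made-up frequency threshold
for col in ['method_name', 'layer']:
    counts = psdf[col].value_counts()
    keep = counts[counts >= MIN_COUNT].index.to_list()
    psdf[col] = psdf[col].where(psdf[col].isin(keep), 'Other')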
Working with Julia 1.1:
The following minimal code works and does what I want:
function test()
    df = DataFrame(NbAlternative = Int[], NbMonteCarlo = Int[], Similarity = Float64[])
    append!(df.NbAlternative, ones(Int, 5))
    df
end
It appends a vector to one column of df. Note: in my full code, I append a more complicated Vector{Int} than the return value of ones.
However, @code_warntype test() does return:
%8 = invoke DataFrames.getindex(%7::DataFrame, :NbAlternative::Symbol)::AbstractArray{T,1} where T
Which means, I suppose, that this isn't efficient. I can't work out what this @code_warntype warning means. More generally, how can I understand the output returned by @code_warntype and fix it? This is a recurring source of confusion for me.
EDIT: regarding @BogumiłKamiński's answer
Then how would one write the following code?
for na in arr_nb_alternative
    @show na
    for mt in arr_nb_montecarlo
        println("...$mt")
        append!(df.NbAlternative, ones(Int, nb_simulations)*na)
        append!(df.NbMonteCarlo, ones(Int, nb_simulations)*mt)
        append!(df.Similarity, compare_smaa(na, nb_criteria, nb_simulations, mt))
    end
end
compare_smaa returns a vector of length nb_simulations.
You should never do such things, as it will cause many functions from DataFrames.jl to stop working properly. Actually, such code will soon throw an error; see https://github.com/JuliaData/DataFrames.jl/issues/1844, which is exactly about patching this hole in the DataFrames.jl design.
What you should do is append a data frame-like object to a DataFrame using the append! function (this guarantees that the result has consistent column lengths), or use push! to add a single row to a DataFrame.
Now, the reason you have a type instability is that a DataFrame can hold vectors of any type (technically, the columns are held in a Vector{AbstractVector}), so it is not possible to determine at compile time what the type of the vector under a given name will be.
EDIT
What you ask for is a typical scenario that DataFrames.jl supports well, and I do it almost every day (as I run a lot of simulations). As I have indicated, you can use either push! or append!. Use push! to add a single run of a simulation (this is not your case, but I add it as it is also very common):
for na in arr_nb_alternative
    @show na
    for mt in arr_nb_montecarlo
        println("...$mt")
        for i in 1:nb_simulations
            # here you have to make sure that compare_smaa returns a scalar
            # if it is passed 1 in nb_simulations
            push!(df, (na, mt, compare_smaa(na, nb_criteria, 1, mt)))
        end
    end
end
And this is how you can use append!:
for na in arr_nb_alternative
    @show na
    for mt in arr_nb_montecarlo
        println("...$mt")
        # here you have to make sure that compare_smaa returns a vector
        append!(df, (NbAlternative=ones(Int, nb_simulations)*na,
                     NbMonteCarlo=ones(Int, nb_simulations)*mt,
                     Similarity=compare_smaa(na, nb_criteria, nb_simulations, mt)))
    end
end
Note that here I append a NamedTuple. As I wrote earlier, you can append a DataFrame or any data frame-like object this way. A "data frame-like object" covers a broad class of things: in general, anything that you can pass to the DataFrame constructor (so e.g. it can also be a Vector of NamedTuples).
Note that append! matches columns by name, so column names must be consistent between the target and the appended object.
This is different from push!, which also allows pushing a row that does not specify column names (in my example above I show that a Tuple can be pushed).
I'm an R user with a great interest in Julia. I don't have a computer science background. I just tried to read a CSV file in Juno with the following command:
using CSV
using DataFrames
df = CSV.read(joinpath(Pkg.dir("DataFrames"),
              "path/to/database.csv"));
and got the following error message
CSV.CSVError("error parsing a 'Int64' value on column 26, row 289; encountered '.'")
in read at CSV/src/Source.jl:294
in #read#29 at CSV/src/Source.jl:299
in stream! at DataStreams/src/DataStreams.jl:145
in stream!#5 at DataStreams/src/DataStreams.jl:151
in stream! at DataStreams/src/DataStreams.jl:187
in streamto! at DataStreams/src/DataStreams.jl:173
in streamfrom at CSV/src/Source.jl:195
in parsefield at CSV/src/parsefield.jl:107
in parsefield at CSV/src/parsefield.jl:127
in checknullend at CSV/src/parsefield.jl:56
I looked at the entries indicated in the data frame: rows 287 and 288 contain 30 and 33 respectively (they seem to be of type Integer), and row 289 contains 30.445 (which is a float).
Is the problem that DataFrames was filling the column with Int values and stopped when it saw a Float?
Many thanks in advance
The problem is that the float occurs too late in the data set. By default, CSV.jl uses a rows_for_type_detect value equal to 100, which means that only the first 100 rows are used to determine the type of each column in the output. Set the rows_for_type_detect keyword argument of CSV.read to e.g. 300 and all should work correctly.
Alternatively, you can pass the types keyword argument to set the column types manually (in this case, Float64 for this column would be appropriate).
I'm currently trying to perform a dynamic lossless assignment in an ABAP 7.0v SP26 environment.
Background:
I want to read in a csv file and move it into an internal structure without any data loss. Therefore, I declared the field-symbols:
<lfs_field> TYPE any which represents a structure component
<lfs_element> TYPE string which holds a csv value
Approach:
My current "solution" is this (lo_field is an element description of <lfs_field>):
IF STRLEN( <lfs_element> ) > lo_field->output_length.
  RAISE EXCEPTION TYPE cx_sy_conversion_data_loss.
ENDIF.
I don't know precisely how it works, but it seems to catch the most obvious cases.
Attempts:
MOVE EXACT <lfs_field> TO <lfs_element>.
...gives me...
Unable to interpret "EXACT". Possible causes: Incorrect spelling or comma error
...while...
COMPUTE EXACT <lfs_field> = <lfs_element>.
...results in...
Incorrect statement: "=" missing .
As the ABAP version is too old, I also cannot use EXACT #( ... ).
Example:
In this case I'm using normal variables. Let's just pretend they are field-symbols:
DATA: lw_element TYPE string VALUE '10121212212.1256',
lw_field TYPE p DECIMALS 2.
lw_field = lw_element.
* lw_field now contains 10121212212.13 without any notice about the precision loss
So, how would I do a perfectly valid, lossless assignment with field-symbols?
I don't see an easy way around that. I guess that's why they introduced MOVE EXACT in the first place.
Note that output_length is not a clean solution. For example, string always has output_length 0, but will of course be able to hold a CHAR3 with output_length 3.
Three ideas for how you could go about your question:
Parse and compare types. Parse the source field to detect format and length, e.g. "character-like", "60 places". Then get an element descriptor for the target field and check whether the source fits into the target. I don't think it makes sense to start collecting the possibly large CASEs for this here. If you have access to a newer ABAP, you could try generating a large test data set there and use it to reverse-engineer the compatibility rules from MOVE EXACT.
Back-and-forth conversion. Move the value from source to target and back and see whether it changes. If it changes, the fields aren't compatible. This is imprecise, as some formats will change although the values remain the same; for example, -42 could change to 42-, although this is the same in ABAP.
To-longer conversion. Move the field from source to target. Then construct a slightly longer version of target, and move source also there. If the two targets are identical, the fields are compatible. This fails at the boundaries, i.e. if it's not possible to construct a slightly-longer version, e.g. because the maximum number of decimal places of a P field is reached.
DATA target TYPE char3.
DATA source TYPE string VALUE `123.5`.
DATA(lo_target) = CAST cl_abap_elemdescr( cl_abap_elemdescr=>describe_by_data( target ) ).
DATA(lo_longer) = cl_abap_elemdescr=>get_by_kind(
    p_type_kind = lo_target->type_kind
    p_length    = lo_target->length + 1
    p_decimals  = lo_target->decimals + 1 ).
DATA lv_longer TYPE REF TO data.
CREATE DATA lv_longer TYPE HANDLE lo_longer.
ASSIGN lv_longer->* TO FIELD-SYMBOL(<longer>).
<longer> = source.
target = source.
IF <longer> = target.
  WRITE `Fits`.
ELSE.
  WRITE `Doesn't fit, ` && target && ` is different from ` && <longer>.
ENDIF.
I often find myself changing the data types of the columns in my dataframes, converting between datetime and timedelta types, or string and time, etc. So I need a way to check which data type each of my columns has.
df.dtypes is fine for numeric types, but for everything else it just shows 'object'. So how can I find out what kind of object a column holds?
You can inspect one of the cells to find the type.
import pandas as pd
#assume some kind of string and int data
records = [["a",1], ["b",2]]
df = pd.DataFrame(records)
df.dtypes
>0 object
>1 int64
>dtype: object
So pandas knows that column 1 is integer storage, but column 0 is shown as object.
df[0].dtype
>dtype('O')
This still shows "Object" storage.
type(df[0][0])
>str
Voila.
Of course, this depends on your exact data structure. If you've got NaNs anywhere in the column then it sometimes plays havoc with the converted type (havoc as in it's not always clear why it ends up as object storage).
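As a small, made-up illustration of that point, and of a quick way to see what an 'object' column actually holds:
import numpy as np
import pandas as pd

# A missing value silently promotes the integer column to float64,
# while the mixed column falls back to generic 'object' storage.
df = pd.DataFrame({"ints": [1, 2], "with_nan": [1, np.nan], "mixed": ["a", 1]})
print(df.dtypes)    # ints: int64, with_nan: float64, mixed: object

# Counting the Python type of each cell shows what an 'object' column contains.
print(df["mixed"].map(type).value_counts())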