I have a formula in my awk script which outputs non-integer numbers with a variable number of decimals. So I was wondering how I can save the outputs with a fixed number of decimals, say 2, in an array. As an example:
awk 'BEGIN{for(i=1;i<10;i++){array[3/i]}}'
You can use sprintf():
awk 'BEGIN{for(i=1;i<10;i++){array[sprintf("%.2f", 3/i)]}}'
This will create an array with the following indexes:
1.00
0.50
0.33
0.60
0.43
1.50
3.00
0.38
0.75
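If you want to check what ended up in the array, you can loop over the indexes and print them. A small sketch (awk's for-in iteration order is unspecified, so the output order may vary):
awk 'BEGIN{
  for (i = 1; i < 10; i++)
    array[sprintf("%.2f", 3/i)]   # store the 2-decimal string as an index
  for (idx in array)
    print idx
}'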
For example, if I run select preferences from stores I get this outcome:
|preferences |
|----------------------------------------------------------------------|
|"debit_rate"=>"0.00", "credit_rate_1"=>"0.01", "credit_rate_2"=>"0.02"|
|"debit_rate"=>"0.03", "credit_rate_1"=>"0.04", "credit_rate_2"=>"0.05"|
|"debit_rate"=>"0.06", "credit_rate_1"=>"0.07", "credit_rate_2"=>"0.08"|
|"debit_rate"=>"0.09", "credit_rate_1"=>"0.10", "credit_rate_2"=>"0.11"|
Is there a way for me to get this outcome?
|debit_rate|credit_rate_1|credit_rate_2|
|----------|-------------|-------------|
|0.00      |0.01         |0.02         |
|0.03      |0.04         |0.05         |
|0.06      |0.07         |0.08         |
|0.09      |0.10         |0.11         |
It looks like you are a small change away from turning these strings into valid JSON. Redshift has JSON functions that allow for more intelligent parsing of these strings. See https://docs.aws.amazon.com/redshift/latest/dg/json-functions.html
If you just change the '=>' to ':' and wrap the whole thing in curly braces '{}' you should be there.
Then you can cast these strings to type SUPER and access the data by key, as sketched below. See: https://docs.aws.amazon.com/redshift/latest/dg/query-super.html
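Something along these lines should work as a starting point. This is only a sketch, using the preferences column and stores table from your question, plus JSON_PARSE and SUPER dot notation from the linked docs:
WITH fixed AS (
    -- turn the "k"=>"v" pairs into a JSON object and parse it into a SUPER value
    SELECT JSON_PARSE('{' || REPLACE(preferences, '=>', ':') || '}') AS prefs
    FROM stores
)
SELECT f.prefs.debit_rate,
       f.prefs.credit_rate_1,
       f.prefs.credit_rate_2
FROM fixed f;
The rates come back as SUPER values; cast them if you need plain numeric columns.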
I'm building a DNN model that predicts 0 or 1, based on skflow with TF v0.9.
My code with TensorFlowDNNClassifier is like this. I train on about 26,000 records and test on about 6,500.
classifier = learn.TensorFlowDNNClassifier(hidden_units=[64, 128, 64], n_classes=2)
classifier.fit(features, labels, steps=50000)
test_pred = classifier.predict(test_features)
print(classification_report(test_labels, test_pred))
It takes about 1 minute and gives the following result:
precision recall f1-score support
0 0.77 0.92 0.84 4265
1 0.75 0.47 0.58 2231
avg / total 0.76 0.76 0.75 6496
But I got
WARNING:tensorflow:TensorFlowDNNClassifier class is deprecated.
Please consider using DNNClassifier as an alternative.
So I simply updated my code to use DNNClassifier.
classifier = learn.DNNClassifier(hidden_units=[64, 128, 64], n_classes=2)
classifier.fit(features, labels, steps=50000)
It also works well, but the result was not the same.
precision recall f1-score support
0 0.77 0.96 0.86 4265
1 0.86 0.45 0.59 2231
avg / total 0.80 0.79 0.76 6496
Class 1's precision improved.
Of course this is good for me, but why did it improve?
Also, it takes about 2 hours, which is roughly 120 times slower than the previous example.
Did I do something wrong, or miss some parameters?
Or is DNNClassifier unstable with TF v0.9?
I give the same answer as here. You might be experiencing this because you used the steps parameter instead of max_steps. On TensorFlowDNNClassifier, the steps parameter in reality behaved like max_steps. Now you can decide whether you really want 50,000 steps in your case, or to stop earlier.
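As a rough sketch of what that looks like (reusing the variables and imports from your question; whether fit accepts max_steps directly depends on your exact TF version):
classifier = learn.DNNClassifier(hidden_units=[64, 128, 64], n_classes=2)
# max_steps caps the total number of training steps, which is what the old
# TensorFlowDNNClassifier's steps parameter effectively did
classifier.fit(features, labels, max_steps=50000)

test_pred = classifier.predict(test_features)
print(classification_report(test_labels, test_pred))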
The SQL statement below works for values like 40,000.99, but for 0 the expected value is 0.00 while the result is .00:
trim(to_char(value,'9,999,999.99'))
Could you please suggest a possible solution for this?
The format should be x,xxx,xxx.xx and 0 should be displayed as 0.00.
E.g.: 40,000, 1,250,000.00, 0.00
You should explicitly indicate the zero before the decimal point by using 0 instead of 9 in that position of the format mask:
trim(to_char(value,'9,999,990.99'))
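For example (Oracle syntax assumed, matching the to_char format masks in the question):
-- 9 suppresses a leading zero, 0 forces one to be printed
SELECT trim(to_char(0, '9,999,999.99')) AS nines_only,   -- '.00'
       trim(to_char(0, '9,999,990.99')) AS with_zero     -- '0.00'
FROM dual;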
I'm trying to pull data stored as $24. I want to convert it from character to numeric. The input(variable-name,comma24.) function is not working for me. A sample of the data is given below.
5.35
5.78
413,000
3,280,000
5.97
6.72
5
6.53
6
4.59
4.25
5
6.38
6.41
4.1
6.56
5.45
6.07
4.28
5.54
5.87
3.88
5.53
5.65
6.47
207,000
4,935,000
4,400,000
6,765,000
2,856,000
53,690,000
You don't show your code, but for some reason I could get it to work when the reading and the conversion were in different data steps, but not when they were in the same data step.
The following works just fine:
DATA one;
INPUT y : $24. @@;
DATALINES;
5.35 5.78 413,000 3,280,000 5.97
;
RUN;
DATA one;
SET one;
z = INPUT(y, comma24.);
RUN;
However, when I put the calculation of z in the first data step, I got missing values without any error message. I have no explanation for this behavior, but hopefully the workaround will work for you as well.
I'm trying to learn how to do calculations with currency.
For example:
I divide $10,000 by 12 months; rounding to 2 decimals I get $833.33.
If I multiply $833.33 * 12 I get $9,999.96, so there is a possible loss of $0.04.
Rounding the 9,999.96 with 2 decimals of precision I get $10,000, but that's not what I want, since the $0.04 is a loss.
I'm using SQL Server Compact 4.0 as the database; the price_month column is decimal(18,2).
Here is my code:
Dim price as Decimal = 10000
Dim pricemonth as Decimal = Math.round((price/12),2) ' 833.33
Console.Writeline(pricemonth*12) ' 9999.96
Console.Writeline(Math.round((pricemonth*12),2)) ' 10000
Any advice on how to increase accuracy with currency? Thanks and have a nice day!
Don't round your calculation. Leave the original numbers untouched, and round only when you display the answer so that it looks nice. For example:
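This is just a sketch of that idea with the numbers from your question; the exact format strings are up to you:
Dim price As Decimal = 10000D
Dim priceMonth As Decimal = price / 12D            ' keep full precision: 833.3333...

Dim total As Decimal = priceMonth * 12D            ' effectively 10000, no 0.04 lost
Console.WriteLine(priceMonth.ToString("F2"))       ' displays 833.33
Console.WriteLine(total.ToString("F2"))            ' displays 10000.00
The intermediate values stay unrounded in memory; only the strings shown to the user (or written to a report) are rounded to 2 decimals.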