Why is Power BI Rounding up my values? - formatting

Why is Power BI doing this to my values? (see video below) It is setting them to 1.00 in the visualization even though they are 99.61, 99.74, etc. in the query table. What is happening? I have tried setting the type to percentage, decimal, and fixed decimal and the same thing always happens. Also the values are set to "don't summarize" in the visualization table.
https://www.youtube.com/watch?v=bNHzelJTW7g&feature=youtu.be
See the video to understand what I'm talking about.

In your query editor you have the following values:
99.79%, 99.91%, 99.74%, 99.82%, 99.74%, 99.61%
These are in percent format, as is clear from the "%" symbol next to your column name.
When you close and load, you put it in a table which is not formatted as a percentage and shows only two decimal places. Rounded to two decimal places, all of these values round up to 1.00. (Note that your total still rounds to 5.99, though.)
If you want them formatted as percentages, use the modeling tab to set the format for the column. (The format you set in the query editor doesn't necessarily carry through to your visualizations.)

Click on your visual, then, in Format visual, go to the callout value, change the display units to None, and set Value decimal places to the number of decimals you need.

Related

Correct type of data for latitude and longitude SSIS ETL process

I'm trying to convert and upload latitude and longitude data into a database through an ETL process I created, where we take the source data from a .csv file and convert it to DECIMAL. Here is an example of what the two values look like:
Latitude (first column): 41.896585191199556
Longitude (second column): -87.66454238198166
I set the data types in the database as:
Latitude DECIMAL(10,8)
Longitude DECIMAL(11,8)
The main problem arises when I try to convert the data from the file to the database; I then get this message:
[Flat File Source [85]] Error: Data conversion failed. The data conversion for column "Latitude" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
View of my process:
When I try to ignore the error, the Latitude and Longitude values in the database end up as NULL. The flat file encoding is 65001 (UTF-8).
I tried conversions to the data types float, DECIMAL, and int, and nothing helped.
My questions are:
What data type should I use for these values in the target database?
What data type should I choose for the flat file input?
What data type should I set for the conversion (I suspect the one the database column uses)?
Please note that some records in the file are missing the location.
view from Data:
view from Data Conversion:
UPDATE
With FastParse enabled I still receive an error. What data type should I choose in this case? I set everything up as @billinkc suggested. When I set an integer type, for example DT_I4, the result is NULL and the same error as before (that dialog offers no way to pick a type such as DECIMAL or STRING for the Latitude value).
You need DECIMAL(11,8). That has three digits before the decimal point and eight digits after.
The conversion failure is no doubt happening when you have longitudes above 100 or less than -100.
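To see the range limit concretely, here's a rough T-SQL illustration (the sample longitude is just for demonstration): decimal(10,8) leaves only two digits in front of the decimal point, so any longitude of 100 or more (or -100 or less) overflows it, while decimal(11,8) has room for three.
SELECT CAST(-87.66454238198166 AS decimal(10,8));  -- fits: only two digits before the point
SELECT CAST(-122.33207000 AS decimal(11,8));        -- fits: three digits before the point
-- SELECT CAST(-122.33207000 AS decimal(10,8));     -- fails with an arithmetic overflow error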
The error reported indicates the failure point is the Flat File Source
[Flat File Source [85]] Error: Data conversion failed. The data conversion for column "Latitude" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
I'm on a US locale machine so you could be running into issues with the decimal separator. If that's the case, then in your Flat File Source, right click and select Show Advanced Editor. Go to Input and Output Properties, and under the Flat File Source Output, expand Output Columns and for each column that is a floating point number, check the FastParse option.
If that works, great, you have a valid Flat File Source.
I was able to get this working two different ways. I defined two Flat File Connection Managers in my package: FFCM Dec and FFCM String. While I prefer to minimize the number of operations and transforms I apply in my packages, declaring the data types as strings can help you get past the hurdle of "I can't even get my data flow to start because of bad data".
Source data
I created a CSV saved as UTF-8
Latitude,Longitude
41.896585191199556,-87.66454238198166
FFCM Dec
I configured a standard CSV
I defined my columns with the DataType of DT_DECIMAL
FFCM String
The front page is the same, but for the columns in the Advanced section I left the data type as DT_WSTR with a length of 50.
At this point, we've defined the basic properties of how the source data is structured.
Destination
I went with consistency on the size for the destination. You're not going to save anything by using 10 vs 11, and I'm too lazy to look up the allowable domain for lat/long numbers.
CREATE TABLE dbo.SO_65909630
(
[Latitude] decimal(18,15)
, [Longitude] decimal(18,15)
)
Data Flow
I need to run, but you either use the correctly typed data when you bring it in (DFT DEC) or you transform it.
The blanks I see in your source data will likely need to be dealt with (either you have a column that needed to be escaped, or there is no data, which will cause the data conversion to fail), so I'd advocate this approach.
Row counts are there just to provide a place to put a data viewer while I was building the answer.
What data type should I use for lat and long
Decimal is an exact data type, so it will store the exact value you supply. When used, it takes the form decimal(precision, scale). Before my current role, I had never used any other data type for non-whole numbers.
Books Online on decimal and numeric (Transact-SQL): https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-ver15
Precision
The maximum total number of decimal digits to be stored. This number includes both the left and the right sides of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.
Scale
The number of decimal digits that are stored to the right of the decimal point. This number is subtracted from p to determine the maximum number of digits to the left of the decimal point. Scale must be a value from 0 through p, and can only be specified if precision is specified. The default scale is 0 and so 0 <= s <= p. Maximum storage sizes vary, based on the precision.
Precision    Storage bytes
1 - 9        5
10 - 19      9
20 - 28      13
29 - 38      17
For the table I defined above, it will cost us 18 bytes (2 * 9) to store each lat/long pair.
But let's look at the actual domain for latitude and longitude (on Earth). This magnificent answer on GIS.se is printed out and hangs from my work monitor: https://gis.stackexchange.com/questions/8650/measuring-accuracy-of-latitude-and-longitude
Pasting the relevant bits here
The sixth decimal place is worth up to 0.11 m: you can use this for laying out structures in detail, for designing landscapes, building roads. It should be more than good enough for tracking movements of glaciers and rivers. This can be achieved by taking painstaking measures with GPS, such as differentially corrected GPS.
The seventh decimal place is worth up to 11 mm: this is good for much surveying and is near the limit of what GPS-based techniques can achieve.
The eighth decimal place is worth up to 1.1 mm: this is good for charting motions of tectonic plates and movements of volcanoes. Permanent, corrected, constantly-running GPS base stations might be able to achieve this level of accuracy.
The ninth decimal place is worth up to 110 microns: we are getting into the range of microscopy. For almost any conceivable application with earth positions, this is overkill and will be more precise than the accuracy of any surveying device.
Ten or more decimal places indicates a computer or calculator was used and that no attention was paid to the fact that the extra decimals are useless. Be careful, because unless you are the one reading these numbers off the device, this can indicate low quality processing!
Your input values show more than 10 decimal places, so I'm guessing it's a calculated value and not a "true observation". That's good; it gives us more wiggle room to work with.
Why, we could dial that decimal declaration down to the following for roughly half the storage cost of the first one:
CREATE TABLE dbo.SO_65909630_alt
(
[Latitude] decimal(8,5)
, [Longitude] decimal(8,5)
);
Well, that's good, we've stored the "same" data at a lower cost. Maybe your use case is just "where are my stores", and even if you're Walmart with under 12,000 stores, who cares? That's a trivial cost. But if you need to also store the coordinates of their customers, the storage cost per record might start to matter. Or use Amazon or Alibaba or whatever very large consumer retailer exists when you read this.
In my work, I deal with meteorological data and it comes in all shapes and sizes, but a common source for me is Stage IV data. It's just hourly rainfall amounts across the contiguous US: 24 readings per coordinate, per day. The coordinate grid is 1121 x 881 (987,601 points), so expressing hourly rainfall in the US for a single day is 23,702,424 rows. The difference between 18 bytes and 10 bytes can quickly become apparent given that Stage IV data is available back to 2008.
We actually use a float (or real) to store latitude and longitude values because it saves us a couple of bytes per coordinate pair (a real is 4 bytes versus 5 for the decimal).
CREATE TABLE dbo.SO_65909630_float
(
[Latitude] float(24)
, [Longitude] float(24)
);
INSERT INTO dbo.SO_65909630_alt
(
Latitude
, Longitude
)
SELECT * FROM dbo.SO_65909630 AS S
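Presumably the float table gets populated the same way; that step isn't shown above, so here's a minimal sketch of it:
INSERT INTO dbo.SO_65909630_float
(
Latitude
, Longitude
)
SELECT * FROM dbo.SO_65909630 AS S;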
Now, this has caused me pain because I can't use an exact filter in queries because of the fun of floating point numbers.
My decimal typed table has this in it
41.89659 -87.66454
And my floating type table has this in it
41.89658 -87.66454
Did you notice the change to the last digit of Latitude? It's 8, not 9 as in the decimal table, but either way it doesn't matter:
SELECT * FROM dbo.SO_65909630_float AS S WHERE S.Latitude = 41.89658
This won't find a row because of floating point rounding exact match nonsense. Instead, your queries become very tight range queries, like
SELECT * FROM dbo.SO_65909630_float AS S WHERE S.Latitude >= (41.89658 - .00005) AND S.Latitude <= (41.89658 + .00005)
where .00005 is a value that you'll have to experiment with given your data to find out how much you need to adjust the numbers to find it again.
Finally, for what it's worth, if you convert lat and long into a geography Point, it's going to coerce the input to float anyway.
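A quick sketch of that, reusing the sample coordinate from the question (SRID 4326, the usual WGS 84 choice, is my assumption here):
DECLARE @pt geography = geography::Point(41.896585191199556, -87.66454238198166, 4326);
-- Lat and Long come back as float, regardless of what type fed the Point
SELECT @pt.Lat AS Latitude, @pt.Long AS Longitude;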

Inserting decimals as 0.00000000

I've got an issue when inserting double values into an MS Access database.
I've set the field, size, to the Currency type with 7 decimal places.
In my code, I have the following line to add the value to the query
cmd.Parameters.Add("#size", OleDbType.Double).Value = CDbl(txt_size.Text)
When debugging, I can see the value in the #size parameter is 0.000008, which is what I typed into the text box.
Yet, when I view the record in Access after the query has run, it shows as 0.0000000, and therefore when viewing the value in the application it shows as 0.0000 as well.
Why is it rounding the value down? Do I need to change something in Access to allow such small numbers?
The currency data type doesn't support values that precise.
See this page for a description of the currency type. It supports 4 decimals.
In formatting, you can of course increase the number of decimals displayed, but that doesn't increase the size of the field.
If possible, I'd change the field to a double precision float or a decimal field (data type Number, field size Decimal). Both these types support higher precision than currency.

Precision gains when data moves from one table to another in SQL Server

There are three tables in our SQL Server 2008 database:
transact_orders
transact_shipments
transact_child_orders.
All three have a common column, carrying_cost. The data type is the same in all three tables: float, with NUMERIC_PRECISION 53 and NUMERIC_PRECISION_RADIX 2.
In table 1, transact_orders, this column has the value 5.1 for three rows. convert(decimal(20,15), carrying_cost) returns 5.100000... here.
In table 2, transact_shipments, three rows fetch carrying_cost from those three rows in transact_orders.
convert(decimal(20,15), carrying_cost) returns 5.100000... here as well.
Table 3, transact_child_orders, sums up those three carrying costs from transact_shipments. The value shown there is 15.3 when I run a normal select.
But convert(decimal(20,15), carrying_cost) returns 15.299999999999999 in this table. And it's showing that precision-gained value in the UI as well, though the UI is only fetching the value, not doing any conversion. In the Java code, the variable that fetches the value from the DB is defined as double.
The code in step 3, to sum up the three carrying_costs, is simple:
...sum(isnull(transact_shipments.carrying_costs,0)) sum_carrying_costs,...
Any idea why this change occurs in the third step ? Any help will be appreciated. Please let me know if any more information is needed.
Rather than post a bunch of comments, I'll write an answer.
Floats are not suitable for precise values where you can't accept rounding errors - for example, finance.
Floats can scale from very small numbers to very high numbers, but they don't do that without losing a degree of accuracy. You can look the details up online; there is a host of good work out there for you to read.
But, simplistically, it's because they're true binary numbers - some decimal numbers just can't be represented as a binary value with 100% accuracy. (Just like 1/3 can't be represented with 100% accuracy in decimal.)
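You can see exactly that effect with the numbers from your question; here's a small T-SQL sketch (the variable name is just illustrative):
DECLARE @carrying_cost float = 5.1;
SELECT CONVERT(decimal(20,15), @carrying_cost) AS one_row,                                  -- 5.100000000000000
       CONVERT(decimal(20,15), @carrying_cost + @carrying_cost + @carrying_cost) AS summed; -- 15.299999999999999
The stored value was never exactly 5.1; the error only becomes visible once the values are summed and converted at a higher precision.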
I'm not sure what is causing your performance issue with the DECIMAL data type, often it's because there is some implicit conversion going on. (You've got a float somewhere, or decimals with different definitions, etc.)
But regardless of the cause, nothing is faster than integer arithmetic. So, store your values as integers. £1.10 could be stored as 110p. Or, if you know you'll get fractions of a penny for some reason, as 11000 hundredths of a penny.
You do then need to consider the biggest value you will ever reach, and whether INT or BIGINT is more appropriate.
Also, when working with integers, be careful of divisions. If you divide £10 between 3 people, where does the last 1p need to go? £3.33 for two people and £3.34 for one person? £0.01 eaten by the bank? But, invariably, it should not get lost to the digital elves.
And, obviously, when presenting the number to a user, you then need to manipulate it back to £ rather than dp; but you need to do that often anyway, to get £10k or £10M, etc.
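A tiny T-SQL sketch of that split (the amounts are made up): integer division gives the base share, and the modulo tells you how many leftover pennies still need a home:
DECLARE @TotalPence int = 1000;   -- £10.00
DECLARE @People int = 3;
SELECT @TotalPence / @People AS BasePencePerPerson,  -- 333, i.e. £3.33
       @TotalPence % @People AS LeftoverPence;       -- 1 penny left to assign, not to lose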
Whatever you do, if you don't want rounding errors due to floating point values, don't use FLOAT.
(There is a lot written online about how to use floats and, more importantly, how not to. It's a big topic; just don't fall into the trap of "it's so accurate, it's amazing, it can do anything" - I can't count the number of times people have screwed up data using that unfortunately common but naive assumption.)

How to simplify big numeric input from user? [Objective C]

I'm building a very basic iPhone app where the user will be able to enter or select a very large numeric cash value (usually in the thousands or millions).
At the moment I am using a simple text box entry with number pad selected.
I am going to use the example of a Football transfer fee as an analogy.
A transfer fee can be many millions and I really do not want the user to be mistyping zeros, or getting frustrated with the number of zeros they have to enter.
In addition, since the text box/numeric cash value is not displayed with any currency formatting, it is very unintuitive to know just how much you are entering.
In this thread I have a way of displaying big numbers on the screen; you'll also notice the numbers are formatted in chunks (i.e. 2.25m, 2m, 7.25m, etc.) -- it makes the process more streamlined and is more visually intuitive.
But what I am unsure about is how to make it easy for the user to enter big numbers without typing stupidly long zeros every time.
Possible solution 1 -- Use a UIPickerView with 3+ segments for each of the units.
Problem -- it won't handle smaller numbers properly, and you may also get odd-looking numbers like 1.15k which, although correct, is not what I want to display.
Possible solution 2 -- Use a +/- button to allow the user to simply increase/decrease the number by a factor of 250 or 500. This is the simplest answer, but it's not as elegant as a UIPickerView.
If there is another way to do this, a way to simplify the input of big numbers from a user, I'd be interested.
You could add a formatted output right above or below the text field. As they enter numbers, update the formatted field, adding currency symbols, commas, and decimals. Not the most elegant way to do this, but it would be simple to implement and intuitive to the user.

What T-SQL data type would you typically use for weight and length?

I am designing a table that has several fields that will be used to record weight and lengths.
Examples would be:
5 kilograms and 50 grams would be stored as 5.050.
2 metres 25 centimetres would be stored as 2.25.
What T-SQL data type would be best suited for these?
Some calculation against these fields will be required but using a default decimal(18,0) seems overkill.
It really depends on the range of values you intend to support. You should use a decimal value that covers this range.
For example, for the weight it looks like you want three decimal places. Say you want the maximum to be 1000 kg; then you need a precision of 7 digits, 3 of them behind the decimal point. This gives you decimal(7,3).
Don't forget to put the units of measure in the column name, e.g. WeightInKilos, LengthInMetres
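Putting those suggestions together, a possible illustrative definition (the table name and the length column's size are my assumptions, roomy enough for weights up to 1000 kg and lengths up to 100 m) might be:
CREATE TABLE dbo.Item
(
[WeightInKilos] decimal(7,3)    -- three decimals for the grams, as in 5.050
, [LengthInMetres] decimal(5,2)  -- two decimals for the centimetres, as in 2.25
);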
The best data type depends on the range and the precision of weights and lengths you'd like to store. For storing people's weight, say between 0.00 and 1000.00 kilograms, you'd need at most 6 digits (precision=6), with 2 digits behind the decimal point (scale=2). That's a decimal:
weight decimal(6,2)
For normal (non-scientific) use, I'd avoid the approximate number formats float and real. They have some surprising gotcha's, and it's hard for end users to reproduce the results of a calculation.
You need to take inventory of what things you are measuring. If they are measurements of UI windows then integers of pixels would be just fine. But that would not work for holding the measurement of the mass of a proton. It is the old tale of scale and precision.
The easiest solution might be to standardize them all to one unit of measure. As harriyott said, you could add that to the column name. (I'm not a huge fan of that, due to flexibility in refactoring later if requirements or designs change, but it is an option.)
If these measurements are wide open and general, such that you need to support very large to very small numbers, the measurement could be split into two columns: one to hold the magnitude and one to hold the unit of measure. One of the biggest downfalls to this would be comparing values, if you need to find the heaviest objects, etc. It can be done with a lookup table, but that certainly adds a level of complexity.
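A rough sketch of that split-column design (all of these names are invented for illustration):
CREATE TABLE dbo.UnitOfMeasure
(
[UnitOfMeasureId] int NOT NULL PRIMARY KEY
, [Name] varchar(20) NOT NULL                 -- e.g. 'gram', 'metre'
, [FactorToBaseUnit] decimal(18,9) NOT NULL   -- multiplier to a common base unit for comparisons
);
CREATE TABLE dbo.Measurement
(
[Magnitude] decimal(18,6) NOT NULL
, [UnitOfMeasureId] int NOT NULL REFERENCES dbo.UnitOfMeasure (UnitOfMeasureId)
);
Finding the heaviest object then means joining to the lookup table and comparing Magnitude * FactorToBaseUnit, which is the extra level of complexity mentioned above.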