I'm using Power Pivot to connect to data stored in a folder. The data is stored as CSV files. I'm using Power Query to format the data and connect to Power Pivot...
I am having issues with my Time columns, which export with time in the following format: 02:33:00 PM... When I load Power Pivot and create a Pivot Table to get the average time, I get a popup: "We cannot summarize this field with Average because it's not a supported calculation for Date data types."
My question is: how can I convert these columns so that I can get the average time? (Apparently they are being stored as dates even though in Power Query I have them as Time Only.)
Thanks
My issue is similar to this one: Multiple data types in a Power BI matrix, but I've got a bit of a different setup that's throwing everything off.
What I'm trying to do is create a matrix table with several metrics that are categorized as Current (raw data values) and Prior (year-over-year percent growth/decline). I've created some dummy data in Excel to get the format the way I want it in Power BI (see below):
Desired Format
As you can see, the Current values are coming in as integers and the Prior % numbers as percentages, which is exactly what I want; however, I was only able to accomplish this through a custom column with the following formula:
Revenue2 = IF(Scorecard2[Current_Prior] = "Current", FORMAT(FIXED(Scorecard2[Revenue],0), "$#,###"), FORMAT(Scorecard2[Revenue], "Percent"))
The problem is that the data comes from a SQL query, and you can't use the FORMAT() function in DirectQuery. Is there a way I can have two different data types in the same column of data? See below for how the SQL data comes into Power BI (I can change this if need be):
SQL
Create two separate measures, one for Current and a second for Prior, and format those measures.
Alternatively, you can use a CASE expression in the SQL query to format the data and return it as a string.
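A minimal T-SQL sketch of that CASE idea, assuming the same table and column names as the DAX formula above (Scorecard, Current_Prior, Revenue); the real query will differ:

-- Format Revenue as text in the source query so Power BI receives one
-- text column and DirectQuery never needs FORMAT(). Names are assumed
-- from the DAX formula above.
SELECT
    Current_Prior,
    CASE
        WHEN Current_Prior = 'Current'
             THEN FORMAT(Revenue, 'C0')   -- currency text, e.g. $1,250,000
        ELSE FORMAT(Revenue, 'P1')        -- percentage text, e.g. 12.5%
    END AS RevenueDisplay
FROM dbo.Scorecard;

The trade-off is that the column arrives as text, so it can be displayed in the matrix but not aggregated further by the visual.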
What I wound up doing was reformatting the SQL code to look like this:
Solution
That way Current/Prior have two separate value columns and the "metric" is categorical; a rough sketch of the reshaping follows below.
I got the idea from this post:
Simple way to transpose columns and rows in SQL?
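For illustration only, here is a rough T-SQL sketch of that kind of transpose; the metric names and columns (Revenue_Current, Revenue_PriorPct, etc.) are made up, not the real schema:

-- Unpivot each metric into its own row while keeping separate Current and
-- Prior columns, so each column keeps a single data type and format.
SELECT m.Metric,
       m.CurrentValue,
       m.PriorPctGrowth
FROM dbo.Scorecard AS s
CROSS APPLY (VALUES
    ('Revenue', s.Revenue_Current, s.Revenue_PriorPct),
    ('Units',   s.Units_Current,   s.Units_PriorPct)
) AS m (Metric, CurrentValue, PriorPctGrowth);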
I have a table in Google BigQuery with 1.4 million records and parcel number as a unique field, and I need to be able to extract the data as a CSV.
However, when I explore it in Data Studio and break it down by parcel, Data Studio puts a limit of exactly 1.1 million records. Even worse, when I export it as a .csv there are only 750k lines.
Is there a limit in Data Studio?
Please help!!
Yes. Currently (March 2019), there's a limit of ~1m rows when fetching data from BigQuery.
If you are trying to extract 1m+ rows as CSV, ideally, you should be doing it from the BigQuery end. See Exporting BigQuery table data. Data Studio should work as a data exploration tool on top of BigQuery.
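If you can run queries against the dataset directly, one option (besides the bq extract route in the linked docs) is the EXPORT DATA statement in current BigQuery Standard SQL. A sketch, where the project, dataset, table and bucket names are all placeholders:

-- Export the whole table to CSV files in Cloud Storage, bypassing the
-- Data Studio row limit entirely. All names below are placeholders.
EXPORT DATA OPTIONS (
  uri = 'gs://my-export-bucket/parcels/parcels-*.csv',
  format = 'CSV',
  overwrite = true,
  header = true
) AS
SELECT *
FROM `my-project.my_dataset.parcels`;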
I have a view that quickly returns 28,000 rows of data within 3 seconds. However, when I use this view to create an SSRS Matrix (pivot) report, it takes almost 2 minutes to run.
More detail about the view:
Gets data from a linked server
Only about 10 columns, with a date field and amounts (the date field is what I pivot on in SSRS to get the Amount total)
What I have tried so far:
Dumped view into a temp table
Added OPTION (RECOMPILE);
The report is very simple, without any parameters. This is one of those reports that users run to do a data dump into Excel before importing it into another system.
Any suggestions?
I would look into doing as much of the aggregation as you can on the server, if that's what's taking the time, especially as it sounds like a relatively static report. Give the data to SSRS in a state where it has to do as little work as possible.
If your query then takes up to two minutes to run on SQL Server, you could look into performance tuning, indexing, etc.
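As an illustration only (the view and column names below are invented, since the real schema isn't shown), the dataset query could hand SSRS pre-summarized rows like this:

-- Aggregate on SQL Server so SSRS only has to lay out the matrix,
-- not sum 28,000 detail rows. View and column names are placeholders.
SELECT  AccountCode,
        DATEFROMPARTS(YEAR(TransactionDate), MONTH(TransactionDate), 1) AS ReportMonth,
        SUM(Amount) AS TotalAmount
FROM    dbo.vw_LinkedServerData
GROUP BY AccountCode,
         DATEFROMPARTS(YEAR(TransactionDate), MONTH(TransactionDate), 1);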
I am building a database for some data to build a cube (SSAS) afterwards. Until now everything is fine and works, but I want to modify my time dimension. So far I have a table with year, month and day and use it as the dimension. But for my use it would be nice if the hierarchy were not just (2016-->5-->20); instead, I want to show month.year at the month level (in this example: 05.2016). I had no problems separating the date, but I can't find a solution to show this part or to combine the two columns in SSIS. Is there any possibility to do so, or can I create this in SSAS while setting up the cube?
What I've found out is that with the CAST and DATEPART functions I can show what I want in SQL Server Management Studio, but I am a newbie in MS SQL and don't know how to save the calculations in a new column.
Add a MonthYear column to your dimension table and populate it with a derived column transformation in SSIS that concatenates the month and year columns.
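If the dimension is loaded from a SQL Server table, the same label can also be built in T-SQL. A hedged sketch, assuming the table is called DimDate and already has Year and Month columns:

-- Add and populate a '05.2016'-style label column. Table and column
-- names are assumptions, not the asker's actual schema.
ALTER TABLE dbo.DimDate ADD MonthYear varchar(7) NULL;
GO
UPDATE dbo.DimDate
SET MonthYear = RIGHT('0' + CAST([Month] AS varchar(2)), 2)
              + '.'
              + CAST([Year] AS varchar(4));

On SQL Server 2012 or later, FORMAT(FullDate, 'MM.yyyy') on a date column gives the same result in one call, and the equivalent string concatenation can be written as an SSIS Derived Column expression instead.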
I am currently entering data into a SQL Server database using SSIS. The plan is for it to do this each week, but the day it happens may differ depending on when the data is pushed through.
I use SSIS to grab data from an Excel worksheet and enter each row into the database (about 150 rows per week). The only common denominator between all the rows is the date. I want to add a date to each of the rows on the day that it gets pushed through. Because the push date may differ, I can't use the current date; I want to use a week from the previous date entered for that row.
But because there are about 150 rows I don't know how to achieve this. It would be nice if I could set this up in SQL Server so that every time a new set of rows is entered, it adds 7 days to the date of the previous set of rows. But I would also be happy to do this in SSIS.
Does anyone have any clue how to achieve this? Alternatively, I don't mind doing this in C# either.
Here's one way to do what you want:
Create a column for tracking the data entry date in your target table.
Add an Execute SQL Task before the Data Flow Task. This task will retrieve the latest data entry date + 7 days. The query should be something like:
select dateadd(day,7,max(trackdate)) from targettable
Assign the SQL result to a package variable.
Add a Derived Column Transformation between your Source and Destination components in the Data Flow Task. Create a dummy column to hold the tracking date and assign the variable to it.
When you map the Excel source to the table in the Data Flow Task, map the dummy column created earlier to the tracking date column. Now when you write the data to the DB, your tracking column will have the desired date.
Derived Column Transformation
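One small addition worth considering: on the very first load the target table is empty, so MAX(trackdate) is NULL. A guarded version of the step-2 query (same table and column names as above) could be:

-- Fall back to today's date when the table is still empty,
-- otherwise use the latest tracking date plus 7 days.
SELECT ISNULL(DATEADD(DAY, 7, MAX(trackdate)), CAST(GETDATE() AS date))
FROM targettable;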