https://public.tableau.com/profile/jamesbond#!/vizhome/NiftyPERatio_16005774872610/NiftyPERatioHeatMap
How do I figure out the data source of this viz? I am trying to create my own similar visualisations, so I am investigating where they are getting their data.
Actually, you can't for this viz.
The author doesn't allow the workbook or its data to be downloaded.
I have set up a data transfer between SA360 (Search Ads 360) and BigQuery. The data for all tables is importing correctly, apart from the "Conversion" table, which has "no data to display".
I have checked the transfer runs and can see that this table was imported successfully (although it does not specify anything else).
Would this be a question for SA360 support, or should I be looking within BigQuery to resolve the issue?
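For reference, this is the kind of quick check I could run on the BigQuery side to confirm whether the Conversion table actually contains rows (project and dataset names below are placeholders):

    -- Placeholder project/dataset names; substitute the actual SA360 transfer target.
    SELECT COUNT(*) AS row_count
    FROM `my-project.sa360_dataset.Conversion`;

    -- Or check the table metadata without scanning the table:
    SELECT table_id, row_count
    FROM `my-project.sa360_dataset.__TABLES__`
    WHERE table_id = 'Conversion';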
Thanks in advance
I am trying to import a small table of data from Azure SQL into Snowflake using Azure Data Factory.
Normally I do not have any issues using this approach:
https://learn.microsoft.com/en-us/azure/data-factory/connector-snowflake?tabs=data-factory#staged-copy-to-snowflake
But now I have an issue with a source table that looks like this:
There are two columns, SLA_Processing_start_time and SLA_Processing_end_time, that have the data type TIME.
Somehow, while writing the data to the staging area, the data is changed to something like 0:08:00:00.0000000,0:17:00:00.0000000, and that causes an error like:
Time '0:08:00:00.0000000' is not recognized File
The mapping looks like this:
I have tried adding a TIME_FORMAT property like 'HH24:MI:SS.FF' but that did not help.
Any ideas to why 08:00:00 becomes 0:08:00:00.0000000 and how to avoid it?
Finally, I was able to recreate your case in my environment.
I get the same error: a leading zero appears ahead of the time (0:08:00:00.0000000).
I even grabbed the files it creates on BlobStorage and the zeros are already there.
This activity creates CSV text files without any error handling (double quotes, escape characters etc.).
And on the Snowflake side, it creates a temporary Stage and loads these files.
Unfortunately, it does not clean up after itself and leaves empty directories on BlobStorage. Additionally, you can't use ADLS Gen2. :(
This connector in ADF is not very good; I even had problems using it with an AWS environment and had to set up a Snowflake account in Azure.
I've tried a few workarounds, and it seems you have two options:
Simple solution:
Change the data type on both sides to DateTime and then transform this attribute on the Snowflake side. If you cannot change the type on the source side, you can just use the "query" option and write a SELECT using the CAST / CONVERT function.
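For example, a source query along these lines (column names taken from the question, the table name is a placeholder, and casting to DATETIME is the assumption) converts the TIME columns before they ever reach the staged file:

    -- Source-side query for the Copy activity ("query" option).
    -- Casting TIME to DATETIME avoids the 0:HH:MM:SS serialization issue.
    SELECT
        CAST(SLA_Processing_start_time AS DATETIME) AS SLA_Processing_start_time,
        CAST(SLA_Processing_end_time   AS DATETIME) AS SLA_Processing_end_time
    FROM dbo.MySourceTable;  -- placeholder table name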
Recommended solution:
Use the Copy data activity to land your data on BlobStorage / ADLS (this activity did that anyway), preferably in the Parquet file format and with a self-designed folder structure (see "Best practices for using Azure Data Lake Storage").
Create a permanent Snowflake Stage for your BlobStorage / ADLS.
Add a Lookup activity and load the data from those files into a table; you can use a regular query or write a stored procedure and call it (see the sketch after these steps).
Thanks to this, you will have more control over what is happening and you will build a DataLake solution for your organization.
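A minimal sketch of the Snowflake side, assuming a Blob container and a target table whose names are placeholders:

    -- Permanent external stage pointing at BlobStorage (account, container and SAS token are placeholders).
    CREATE STAGE IF NOT EXISTS my_blob_stage
      URL = 'azure://mystorageaccount.blob.core.windows.net/datalake/sla/'
      CREDENTIALS = (AZURE_SAS_TOKEN = '<sas-token>');

    -- Load the staged Parquet files; run this from the ADF Lookup activity,
    -- either directly or wrapped in a stored procedure.
    COPY INTO SLA_DATA
    FROM @my_blob_stage
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;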
My own solution is pretty close to the accepted answer, but I still believe that there is a bug in the built-in direct-to-Snowflake copy feature.
Since I could not figure out how to control the intermediate blob file that is created on a direct-to-Snowflake copy, I ended up writing a plain file into blob storage and reading it again to load into Snowflake.
So instead of having it all in one step, I manually split it up into two actions:
One action takes the data from Azure SQL and saves it as a plain text file on the blob storage.
The second action then reads the file and loads it into Snowflake.
This works, and is supposed to be basically the same thing the direct copy to Snowflake does, hence the bug assumption.
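Because the second action loads a file I wrote myself, the COPY can also spell out exactly how times are parsed. A rough sketch, with stage, file and table names as placeholders:

    -- Load the manually written CSV from blob storage into Snowflake.
    -- Stage, file and table names are placeholders.
    COPY INTO SLA_DATA
    FROM @my_blob_stage/sla_export.csv
    FILE_FORMAT = (
        TYPE = CSV
        SKIP_HEADER = 1
        FIELD_OPTIONALLY_ENCLOSED_BY = '"'
        TIME_FORMAT = 'HH24:MI:SS'  -- matches 08:00:00 as written by the first action
    );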
I am using Azure Data Factory to read data from Application Insights via REST API by passing a KUSTO query and I am trying to write the results to an Azure SQL database.
Unfortunately when I execute my pipeline I get the following error:
UserErrorSchemaMappingCannotInferSinkColumnType,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Data type of column '$['Month']' can't be inferred from 1st row of data, please specify its data type in mappings of copy activity or structure of DataSet.,Source=Microsoft.DataTransfer.Common
It seems like an error in the mapping, but from the mapping tab I am unable to specify the data type of the columns:
Can you provide me a hint?
Update: I use the Copy data activity with the following REST source:
As I understand it, the copy activity runs without error, but the data is not inserted.
We're glad to hear that you have resolved the issue. I'm posting your solution as an answer to close out this question:
"In the end I managed to solve my issue by following this blog: https://www.ben-morris.com/using-azure-data-factory-with-the-application-insights-rest-api/"
This can be beneficial to other community members.
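Purely as a hypothetical sketch of giving the sink an explicit structure (the column names are assumptions, not taken from the thread), the Azure SQL sink table could be declared with explicit types like this:

    -- Hypothetical sink table in Azure SQL with explicit column types,
    -- so nothing has to be inferred for [Month] on the sink side.
    CREATE TABLE dbo.AppInsightsMonthly (
        [Month]      DATE NOT NULL,
        RequestCount INT  NOT NULL
    );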
I have a report that I generate on a weekly basis. I have the code written in SQL, and I then pull all the data into Excel's data model.
I then create pivot tables and dashboards in Excel from that data.
The SQL code creates a new table of the same name every time and deletes the older version of the table. There isn't any way for me to just append the new data, as the report is run from the very start and not just on the new data.
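For context, the weekly rebuild follows roughly this drop-and-recreate pattern (table, columns and source are placeholders):

    -- Weekly rebuild: drop the old table and recreate it under the same name.
    DROP TABLE IF EXISTS dbo.WeeklyReport;

    SELECT OrderId, OrderDate, Amount  -- placeholder columns
    INTO dbo.WeeklyReport
    FROM dbo.Orders;                   -- placeholder source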
I wish to automate this process of refreshing my dashboard from the data I produce in SQL. Is there a way to do so?
Currently I create a new table in SQL, import the data into Excel's data model, and then recreate the dashboard.
I am not even sure if this is possible. Any help would be greatly appreciated!
Solved!
After some digging, I was able to find a feature that Excel's data model supports.
Instead of making a connection directly to a SQL Server Table, you can create a connection by writing a SQL Query.
This way, even if you delete and recreate the table when updating it, as long as the name remains the same, Excel's data model will be able to pull data from it when you hit Refresh!
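For instance, the connection can point at a query like this instead of at the table object itself (the table name is a placeholder):

    -- Query used as the data model connection instead of a direct table reference.
    SELECT *
    FROM dbo.WeeklyReport;  -- placeholder table name

Since Excel only re-runs this query text on refresh, dropping and recreating the table between refreshes does not break the connection.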
I've imported some data into BigQuery; however, I can only query the table from Job History and can't seem to add it as a dataset.
What do I need to do in order to convert this as a dataset?
How I imported the data: it was done via a third-party app (StitchData) which had access to my Google Analytics.
Here are some more additional import details.
From your screenshot, "Destination table" should be in the format [DATASET].[TABLE].
Also, "Table Info" > "Table ID" should show the same info.
I guess you already have a dataset and just need a way to see it.
If so, this video may help you to locate "dataset" in BigQuery Classic UI.
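Once the dataset is visible, the table can be queried by its fully qualified name; the project, dataset and table names below are placeholders for the ones shown under "Destination table" / "Table ID":

    -- Placeholder names; substitute the ones from the transfer's destination table.
    SELECT *
    FROM `my-project.stitch_google_analytics.ga_sessions`
    LIMIT 10;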