We can use a SQL Server connection between Power BI and SQL Server to run a SQL query against a database. But if I only have a CSV file, how can I request information with a SQL query? We can import the CSV file and (I suppose) do some transformations with Power Query (I believe Power Query actually uses the M language, not DAX), but I don't want to use DAX; I want to use a SQL query, the same way I request data from SQL Server.
Example of a query against SQL Server:
= Sql.Database("LAPTOP-P3DH07C9", "Projet Big Data", [Query="SELECT * FROM MatchComplete"])
Is this possible, or must I use DAX?
I'm working on a SQL Server to Snowflake migration project. I pointed the SSRS reports to a Snowflake data source and am converting the SQL queries for Snowflake, but I can't figure out how to write queries for parameterized reports. For example, I want to convert SELECT * FROM Student WHERE Std_id = @id into a Snowflake query.
You can use SQL variables:
https://docs.snowflake.com/en/sql-reference/session-variables.html
or Snowflake Scripting Variables:
https://docs.snowflake.com/en/developer-guide/snowflake-scripting/variables.html
I think the SQL session variables would be helpful in your case, but Snowflake Scripting variables are more similar to your SQL Server @ variables.
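As a sketch of the session-variable approach (the table and column names come from the question; the literal value 101 is a made-up example standing in for the report parameter):

```sql
-- Set a session variable in place of the SSRS parameter, then reference it with $
SET id = 101;  -- in practice, the value comes from your report parameter
SELECT * FROM Student WHERE Std_id = $id;
```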
I have an Excel file reading some data from an OLAP service. The query was constructed using the GUI. How can I obtain the corresponding MDX or SQL query that Excel actually sends to the OLAP server?
You can do it through SQL Server Profiler. Just run the profiler against Analysis Services, then run/refresh your query in Excel, and you will see the MDX query in the profiler trace.
Right-click on your pivot table -> Tools -> Show MDX
I have to compare a table in Databricks with the same table in SQL Server and populate only the missing records into Databricks. Can someone help me with how to connect to SQL Server from Databricks, and how and where to write the query that will populate the missing data?
Thanks!
You can connect to SQL Server using the standard JDBC interface supported by Spark; Databricks runtimes contain the drivers for MS SQL out of the box. Once the data is read, you can do an anti join between the SQL Server data and your data in Databricks. Something like this (in Python):
# JDBC credentials (placeholders; fill in your own)
connection_properties = {"user": username, "password": password}
jdbc_url = f"jdbc:sqlserver://{hostname}:{port};database={database}"

# Read the SQL Server table and the existing Databricks (Delta) table
sql_data = spark.read.jdbc(jdbc_url, "your_table", properties=connection_properties)
your_data = spark.read.format("delta").load("path")

# Keep only rows present in SQL Server but missing from the Delta table
diff = sql_data.join(your_data, <join-condition>, "leftanti")
# work with diff, e.g. append it to the Delta table
When reading from SQL Server, follow the guidance on optimizing read performance, though the right approach may depend on your actual schema.
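The "leftanti" join keeps only the rows whose join key exists on the left side but not on the right. A tiny plain-Python sketch of that semantics (the data and names are invented):

```python
# Illustration of "left anti" semantics with plain Python dicts;
# the dict keys stand in for the join condition.
sql_rows = {1: "alice", 2: "bob", 3: "carol"}  # rows in SQL Server
dbx_rows = {1: "alice", 3: "carol"}            # rows already in Databricks
missing = {k: v for k, v in sql_rows.items() if k not in dbx_rows}
print(missing)  # {2: 'bob'}
```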
So I have a DAX query that I have built in SSAS. But since my application can't consume DAX, I thought pushing the set of DAX results to a database server table would be the ideal solution for my application to read from.
How do I output the contents of a DAX query to a SQL Server database table? And if possible truncate the contents of the table before each run?
I'm using SQL Server 2016 if that helps.
I got around this by using a linked server to output the contents of an OPENQUERY call to a table variable in SQL.
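A sketch of that pattern, assuming a linked server named SSAS_LINK pointing at the SSAS instance, a target table dbo.DaxResults, and a DAX table 'YourDaxTable' (all three names are made up); the TRUNCATE handles clearing the table before each run:

```sql
-- Clear the target table, then load the DAX result set through the linked server
TRUNCATE TABLE dbo.DaxResults;

INSERT INTO dbo.DaxResults
SELECT * FROM OPENQUERY(SSAS_LINK, 'EVALUATE ''YourDaxTable''');
```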
I want to convert the SQL Server Import Wizard steps into a normal query; in other words, what is the SQL query for importing a CSV file into the database?
You need a conversion here; take a look at the following for some possible help:
http://www.convertcsv.com/csv-to-sql.htm
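Alternatively, SQL Server can load a CSV file directly with BULK INSERT. A sketch, assuming a target table dbo.TargetTable that already matches the CSV's columns, a file path readable by the server, and a header row to skip (table name and path are made up):

```sql
-- Load a CSV file straight into a table; FIRSTROW = 2 skips the header row
BULK INSERT dbo.TargetTable
FROM 'C:\data\input.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
```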