Trying to change BigQuery dataset and table name in a Data Studio data source connection with a parameter

I built a Data Studio report with a BigQuery data source whose custom query fetches data from a given dataset and table. I want to pass a parameter to the connector and use different datasets and table names, so the visualisation can show data from different tables without duplicating the report for each table.
When I tried to use a parameter in the SQL query for the table name, like this:
select id, name from @tablename
I got:
Query parameters cannot be used in place of table names
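
For context, BigQuery accepts query parameters anywhere a literal value could appear, but never in place of identifiers; a hedged sketch (the dataset, table, and parameter names here are illustrative):

-- Parameters can stand in for values, so this is accepted:
SELECT id, name FROM mydataset.mytable WHERE country = @country;

-- ...but not for identifiers, which is why the query above fails:
-- SELECT id, name FROM @tablename;

-- One common workaround, when the candidate tables share a naming prefix, is a
-- wildcard table with the parameter applied to the _TABLE_SUFFIX pseudo-column:
SELECT id, name
FROM `myproject.mydataset.events_*`
WHERE _TABLE_SUFFIX = @table_suffix;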

Related

How to Parameterize a Copy Activity to SQL DB with Azure Data Factory

I'm trying to automatically update tables in Azure SQL Database from another SQL DB with Azure Data Factory. At the moment, the only way to update a table in Azure SQL Database is to manually select the table you want to update there.
My configuration to automatically select the table from the source SQL DB that I want to copy to Azure SQL Database uses the following source-query expression:
@concat('SELECT * FROM ', pipeline().parameters.Domain, '.', pipeline().parameters.TableName)
Can someone let me know how to configure my SINK and/or connection to automatically insert the table selected from SOURCE?
(Screenshots of the SINK and connection settings were attached here.)
You can use the Edit option in the SQL dataset.
Create a dataset parameter for the sink table name. In the SQL sink dataset, tick the Edit checkbox and use the dataset parameter as the table name. If you want, you can use a dataset parameter for the schema name as well; here I have given it directly (dbo).
Now, in the copy activity sink, you can supply the table name dynamically from any pipeline parameter (your parameter, in this case) or any variable, using dynamic content.
Also, enable Auto create table: it creates a new table if no table with the given name exists, and if one does exist it skips creation and copies the data into it.
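
A sketch of what the parameterised sink dataset might look like in JSON (the dataset, linked service, and parameter names here are illustrative, not from the answer):

{
    "name": "SinkSqlTable",
    "properties": {
        "type": "AzureSqlTable",
        "linkedServiceName": { "referenceName": "LS_AzureSql", "type": "LinkedServiceReference" },
        "parameters": { "SinkTableName": { "type": "string" } },
        "typeProperties": {
            "schema": "dbo",
            "table": { "value": "@dataset().SinkTableName", "type": "Expression" }
        }
    }
}

The copy activity sink then supplies the value for SinkTableName, e.g. @pipeline().parameters.TableName, via dynamic content.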

Impala: copy a table which has a complex type

I need to dynamically copy a filtered list of tables from one DB to another. The script to do this works fine, but I am creating the tables using CREATE TABLE AS SELECT ..., and this fails for tables with complex types such as ARRAY.
So how can I create a table in Impala based on an existing table which has a complex type?
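
A hedged sketch of one way around the CTAS limitation: CREATE TABLE ... LIKE copies the full schema, complex types included (the database and table names are placeholders, and the loading step assumes Parquet storage):

-- CTAS fails when the source has complex columns such as ARRAY:
--   CREATE TABLE target_db.t AS SELECT * FROM source_db.t;
-- CREATE TABLE ... LIKE copies the schema, complex types included:
CREATE TABLE target_db.t LIKE source_db.t;
-- Impala itself cannot write complex-typed columns, so populate the copy by
-- copying the underlying data files (or via Hive), then refresh the metadata:
REFRESH target_db.t;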

Table name is getting appended to column names in the resultant file in Azure Data Factory

I was trying to get data from an on-prem Hive source to Azure Data Lake Gen2 using Azure Data Factory.
As I need to get data for multiple tables, I created a file (e.g. tnames.txt) with all my table names and stored it in Data Lake Gen2.
In Azure Data Factory, I created a Lookup activity and passed the tnames.txt file to it.
Then I added a ForEach activity after that Lookup activity, and inside the ForEach added a Copy activity.
In the Copy activity source, I am using a query to extract the data.
The sink is Data Lake Gen2.
Example code:
select * from tableName
Here the table name is passed in dynamically from tnames.txt.
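Presumably the query is built with dynamic content along these lines (the item().tablename property is a guess at how the Lookup output rows are shaped):
@concat('select * from ', item().tablename)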
But after the data is copied into the data lake, the headers in the copied data look like:
"tablename.columnname".
For example: the table name is Employee and a few of its columns are ID, Name, Gender, ....
The columns in my resultant file come out as Employee.ID, Employee.Name, Employee.Gender, but my requirement is just the column name.
Basically, the table name is appended to the column name.
How do I solve this issue? Or is there any other way to get data for multiple tables in a single pipeline/copy activity?
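One Hive-side detail worth knowing, though it is not from this thread: Hive qualifies result-set column names with the table name when hive.resultset.use.unique.column.names is true, which is its default. Whether the ADF Hive connector lets you prefix the extraction query with a set statement is an assumption, but the property itself is standard Hive:
set hive.resultset.use.unique.column.names=false;
select * from tableName;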
Check the mapping tab of your Copy activity. If a mapping is defined, clear it and use auto-create table. It will generate the schema according to the source schema, so there is no need to explicitly create the table with a defined schema; let the table be auto-created, and the required mapping will be generated automatically.

Declare variable in template table

I am writing an ETL job in BODS to extract data from a HANA table and load it into SQL Server.
The job has to create a new table in SQL Server every time it runs, named with that day's date. I know we can do that for flat files by using a global variable, but I am not sure how to declare a similar variable in a template table to get the desired result.
Why do you want to use template tables? You can do the same as below:
Load the data into a standard staging table using BODS.
Using the DS scripting mechanism, generate a query to create the table (see the sketch below).
Execute the query using a SQL transform.
Generate another query to copy the data from the staging table to the table created above.
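In DS script terms, the generate-and-execute steps might look roughly like this (the datastore name, staging table, and columns are placeholders, and the sql() script function is used here as a shortcut for the SQL-transform route described above):

# Build the table name from today's date and create the table
$G_TableName = 'LOAD_' || to_char(sysdate(), 'YYYYMMDD');
sql('DS_SQLSERVER', 'CREATE TABLE ' || $G_TableName || ' (ID int, NAME varchar(100))');
# Copy the staged rows into the newly created table
sql('DS_SQLSERVER', 'INSERT INTO ' || $G_TableName || ' SELECT ID, NAME FROM STG_TABLE');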
There are several other ways as well: for example, you can write a DB procedure that creates a table with the desired name and copies the data over from the staging table, and call that procedure from DS.
Hope this helps.
Cheers.
Shaz

Copy database schema to an existing database

I'm using Microsoft SQL Server Management Studio.
I currently have an existing database with data in it, which I will call DatabaseProd.
I also have a second database whose data is used for testing, so the data isn't exactly correct or up to date. I will call this database DatabaseDev.
However, DatabaseDev now contains newly added tables, newly added columns, etc.
I would like to copy this new schema from DatabaseDev to DatabaseProd while keeping the DatabaseProd's Data.
Ex.
DatabaseProd contains 2 tables:
TableA with columns ID and Name
TableB with columns ID and jobName
and these tables contain data that I would like to keep.
DatabaseDev contains 3 tables:
TableA with columns ID, Name and phoneNum
TableB with columns ID and jobName
TableC with columns ID and document
and these tables contain data that I don't need.
Copy the DatabaseDev schema to DatabaseProd, but keep the data from DatabaseProd.
So DatabaseProd after the copy would look like this:
TableA with columns ID, Name and phoneNum
TableB with columns ID and jobName
TableC with columns ID and document
But the tables would contain their original data.
Is that possible?
Thank you
You can use Red Gate SQL Compare; this will allow you to compare both DBs and generate a script to run against the target DB. You have to pay for a license, but you get a 14-day trial period.
This tool, along with Data Compare, are two tools I always insist on in new roles, as they speed up development time and minimise human error.
Also, a good tip when using SQL Compare: if you need to generate a rollback script, you can edit the project (after creating your rollout script) and switch the source and destination around; this will create a script which returns the schema to its original state if the rollout script fails. However, be very careful when doing this, and don't select Synchronize using SQL Compare; rather, generate a script - the two options to choose between are Generate Script and Sync using SQL Compare.
Yes, you can just generate a database script for the schema only; no data will be added to that script.
Also, you only need to select the third table while generating the database script; run that script against your production server database and it will create the new table (TableC in your case) without any data.
For more information about how to create a database script, please follow the link below:
http://blog.sqlauthority.com/2011/05/07/sql-server-2008-2008-r2-create-script-to-copy-database-schema-and-all-the-objects-data-schema-stored-procedure-functions-triggers-tables-views-constraints-and-all-other-database-objects/
You need an ALTER TABLE statement:
ALTER TABLE TableA ADD phoneNum VARCHAR(10) -- insert the type of your choice here
It looks like there are no changes to TableB.
Add TableC:
CREATE TABLE TableC (ID INT, document VARCHAR(50))
Do you need to copy constraints, indexes or triggers over?