How to rename a Linked Service in ADF

I am new to ADF and need some help please.
I created a Linked Service for a SQL database. I now need to make it dynamic (I know how to do this), and I also want to rename the Linked Service to reflect this dynamic nature, but I cannot find a way to do it.
Can someone help please? A Google search turns up very few hits.
Thank you

I'm pretty sure you can't rename a Linked Service after it is published. If you want to create linked services dynamically, I would suggest building a small script using the available SDKs, like the ArtifactsClient from azure-synapse-artifacts for Python if you're running in Synapse Analytics. You could then create a Linked Service for each run and tear it down after you've run your pipelines. There should be an SDK for this in the "regular" Data Factory as well.
EDIT: Just noticed that there's a function for renaming a linked service in the mentioned API. See the documentation here.

If your ADF is linked to Git you can follow these simple steps to rename your Linked Service:
Clone your ADF repository locally
Create a new branch (optional)
Open the ADF code in any editor and search-and-replace the Linked Service name in all files (I use VSCode); this step can also be scripted, as in the sketch after this list
Rename the Linked Service file to the new name (if you don't, ADF will report an issue)
Commit and push your changes
Verify in ADF (don't forget to switch branches)
Create a pull request to merge your changes into your main branch
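For larger factories, the search-and-replace and file rename can be scripted. Here is a minimal Kotlin sketch; the linkedService/ folder matches ADF's standard Git layout, but the function name is mine, and the plain string replacement is naive (it may touch unrelated matches), so review the diff before committing:

import java.io.File

// Sketch: bulk-rename a Linked Service in a locally cloned ADF repository.
fun renameLinkedService(repoRoot: File, oldName: String, newName: String) {
    // Rewrite references in every JSON artifact (pipelines, datasets, triggers, ...).
    repoRoot.walkTopDown()
        .filter { it.isFile && it.extension == "json" }
        .forEach { file ->
            val text = file.readText()
            if (text.contains(oldName)) file.writeText(text.replace(oldName, newName))
        }
    // Rename the definition file itself, otherwise ADF will flag a mismatch.
    val definition = File(repoRoot, "linkedService/$oldName.json")
    if (definition.exists()) definition.renameTo(File(repoRoot, "linkedService/$newName.json"))
}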

Related

What's the recommended way to do database migrations with Ktor + Exposed (Kotlin)?

Neither the Ktor nor the Exposed framework has built-in support for database migrations. What's the recommended way to do this?
If you are using Ktor with Gradle, I would recommend running Flyway programmatically inside the entry point of your Application. This way it can easily be part of your Continuous Delivery pipeline. You can see the Flyway API docs here: https://flywaydb.org/documentation/api/
What I essentially do though is add the dependency (using Kotlin DSL):
implementation("org.flywaydb:flyway-core:6.5.2")
And then all you need to do is create an instance of Flyway and call migrate when you load your module:
fun Application.module() {
    // Run any pending migrations before the rest of the module is wired up.
    Flyway.configure().dataSource(/* config to your DB */).load().migrate()
    // the rest of your application
    routing {
    }
}
You could of course extract the creation of Flyway into your DI tool (e.g. Koin) and add some logging to show progress; a sketch of that follows below.
This way your DB will be migrated (if necessary) every time, just before your app starts.
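A minimal sketch of that extraction, assuming Koin is on the classpath; the module name, connection details, and logger are illustrative, not from the original answer:

import org.flywaydb.core.Flyway
import org.koin.dsl.module
import org.slf4j.LoggerFactory

// Illustrative Koin module: build Flyway once so it can be injected where needed.
val migrationModule = module {
    single {
        Flyway.configure()
            .dataSource("jdbc:postgresql://localhost:5432/app", "user", "secret") // assumed config
            .load()
    }
}

// With the Flyway 6.x API used above, migrate() returns the number of applied migrations.
fun runMigrations(flyway: Flyway) {
    val applied = flyway.migrate()
    LoggerFactory.getLogger("Migrations").info("Applied {} migration(s)", applied)
}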
As for writing the actual migrations, the official docs are very helpful. What you essentially need to do is:
Make sure you have the required directory for migration files (src/main/resources/db/migration by default; see the sketch after this list for pointing Flyway elsewhere).
Write plain SQL in separate migration files in that directory. The filenames need to stick to the convention too: by default, a capital V, then a number which you increment for each new migration, then a double underscore (this tricked me at the beginning ;) ), then a snake_case description, e.g. V1__Create_person_table.sql.
Run the app and observe magic :D
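If your migrations live somewhere other than the default directory, you can point Flyway at them explicitly. A small sketch; the location and connection values are placeholders:

import org.flywaydb.core.Flyway

fun migrateFromCustomLocation() {
    Flyway.configure()
        .dataSource("jdbc:postgresql://localhost:5432/app", "user", "secret") // placeholder credentials
        .locations("classpath:db/changelog") // overrides the db/migration default
        .load()
        .migrate()
}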
Database tables are migrated at deployment via Flyway, and scripts can be added to the db/migrations folder to add new tables or execute queries, such as inserting data, on server startup.
(https://github.com/arun0009/kotlin-ktor-exposed-sample-api)
Here's how Flyway works: https://flywaydb.org/getstarted/how
Download, install, and configure Flyway (see https://flywaydb.org/getstarted/firststeps/commandline)
Point it at a database, which is where the flyway_schema_history table will live
Write your versioned migration files for creating tables or inserting data following the naming convention V<version number>__<migration description> and run them with flyway migrate
Write your repeatable migration files for creating views following the naming convention R__<migration description> and run them with flyway migrate
Write your undo migration files for dropping tables or deleting data following the naming convention U<version number>__<migration description> and run them with flyway undo (note: undo is a commercial Flyway feature)
Check your migration status with flyway info (or programmatically, as sketched below) and commit your files if you're happy
Make any necessary modifications and rerun the migration. Repeat and commit.
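The same migrate/info flow can also be driven from code through Flyway's Java API instead of the CLI. A minimal sketch using a placeholder in-memory H2 database:

import org.flywaydb.core.Flyway

fun main() {
    val flyway = Flyway.configure()
        .dataSource("jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1", "sa", "") // placeholder connection
        .load()
    flyway.migrate() // the equivalent of `flyway migrate`
    // The equivalent of `flyway info`: list each migration's version, description, and state.
    flyway.info().all().forEach { info ->
        println("${info.version} ${info.description} ${info.state}")
    }
}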
In case this is a one-time activity, you can try using an off-the-shelf utility like SQL Data Compare.
To make this happen, you need to ensure that both databases are accessible locally from your machine, so that you can create two DB connections and run a comparison against them.
At the end of the comparison, you can get an auto-generated SQL script to run against your new schema to synchronize it.
If you wish to compare schema objects as well, Red Gate provides a similar schema-comparison utility, which they have now started to call SQL Compare (God knows why!!). This utility also produces an auto-generated script to help you.
But again, Red Gate is good for a one-time migration, and you can use it with their trial version for a period of 30 days. For similar activity on a regular basis, you would need to buy the licensed version of the software.
For data migration, I use Navicat Premium, which I find very easy to use, but it is not open source. If you are looking for an open-source tool, you can use SQLines Data, which is an open-source (Apache License 2.0), scalable, parallel, high-performance data transfer and schema conversion tool that you can use for database migrations and ETL processes.
SQLines
It is available for Linux and Windows, both 64-bit and 32-bit platforms.
You can also use SQLines Data for cross-platform database migration. The tool migrates table definitions, constraints, and indexes, and transfers data.
This is how you can start with SQLines:
Download and unzip the file; no installation is required
Run sqldataw.exe on Windows to launch the GUI version
Run ./sqldata on Linux to launch the command-line tool
There are also migration guidelines available for specific databases: Guidelines

Deploy a mondrian schema in pentaho 5.1 without schema workbench

I have a question: in Pentaho 5.1, how can I deploy a cube without using Schema Workbench? I'm kind of a newbie in Pentaho.
Is there a command line? Java code? Or something like that...
Thanks a lot!
You can do that in the User Console.
There is a menu Manage Data Sources... There you can upload your XML and point it at a database connection.
First, I assume you have installed the BA Server and have made at least a fact table.
In case you don't know what a fact table is, or someone else is reading this answer, you can find a brief explanation here.
Of course, it's better to have a full star schema. You cannot create a snowflake schema inside the Pentaho User Console; you can create one with Pentaho Schema Workbench or by manually editing mondrian.xml.
Make sure that your JDBC driver is inside the BA Server driver directory. Then open the Pentaho User Console (by default at localhost:8080/pentaho or yourdomain.name:8080/pentaho), log in as administrator, and:
File -> New -> Data Source
Choose the data source type
Choose the fact table and define connections to the dimensions (if they exist)
Choose to modify the cube at the end of the data source wizard

Add tables to the database on CloudBees or run a .sql file against the database on CloudBees

I have an AppFuse Struts 2 app that I am trying to deploy on CloudBees. I have created a database and bound the app to it, but I am not sure how to add tables to the database on CloudBees. Is there a way I can run a .sql script against a database created on CloudBees?
Also, when I try to run the app from its link, it gives the error "Requested resource not available". I am guessing it's because the DB is missing. Can anyone help me with adding tables and data to the DB, and with getting the app to run smoothly on CloudBees?
Thanks a ton for your help.
You can add the tables and populate the data in different ways. Here is an article which explains how to do it.
From your application, using Spring, you can use something like the following; you will need to figure out the equivalent for other frameworks.
<!-- Database initializer. If any of the scripts fails, the initialization stops. -->
<jdbc:initialize-database data-source="dataSource">
    <jdbc:script location="${jdbc.initLocation}"/>
    <jdbc:script location="${jdbc.dataLocation}"/>
</jdbc:initialize-database>
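If you're not using Spring, the same idea can be hand-rolled with plain JDBC. A minimal Kotlin sketch; the URL, credentials, and the naive semicolon splitting are assumptions to adapt to your driver and scripts:

import java.io.File
import java.sql.DriverManager

// Sketch: execute an init script against the bound database at startup.
fun runSqlScript(jdbcUrl: String, user: String, password: String, scriptPath: String) {
    val script = File(scriptPath).readText()
    DriverManager.getConnection(jdbcUrl, user, password).use { conn ->
        conn.createStatement().use { stmt ->
            // Naive split on ';' is fine for simple DDL/insert scripts without procedures.
            script.split(';')
                .map { it.trim() }
                .filter { it.isNotEmpty() }
                .forEach { stmt.execute(it) }
        }
    }
}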
If you plan to use a MySQL client, then you should take a look at this, which explains it step by step.
Regarding how to deploy and to bind a Tomcat 7 app with your database, you can take a look at this blog post.
You can find info about all the containers we support here.

CloudBees: create the DB's tables

After developing my web site using Hibernate and Struts 2, I now want to host it on CloudBees. The problem is that I don't know how to create my database's tables.
I used the method shown in this video to create my tables: http://vimeo.com/33445098 But it doesn't work!
It shows how to connect to my server using MySQL Workbench, but when I created a new server instance the connection couldn't be established, even though I used the appropriate information.
Thanks very much :)
After days of looking around on the web I found this; it's very useful:
http://www.youtube.com/watch?v=oWE6s_FQBwA

SQL table not created when deploying a WAR file to Liferay

I've created a JSR-286 portlet for Liferay which uses services to interact with a database. I can deploy the portlet without problems or errors, but the table defined by the services is not created!
I get no "table not found" error when I test the portlet; I get no errors at all. The table just isn't there in the database. I've found other posts saying that I should use the generated "create.sql" file that the Liferay Service Builder creates, but I don't see that file anywhere.
Can someone help me out?
Have the tables never been created? I had a similar problem when I deleted the tables by hand: I thought they would be created again when the portlet was redeployed, but it didn't happen.
After studying the source code I found out that Liferay stores information about the portlets in the servicecomponent table and checks two things before it executes the (pseudo) SQL in META-INF/tables.sql:
The build.number in service.properties must be higher than the one stored in servicecomponent,
The tables.sql must be different from the one stored in servicecomponent.
Only then is tables.sql executed.
An easy way to achieve that is to delete all entries in servicecomponent addressing your portlet.
Try deleting the entry for your portlet from the servicecomponent table, then redeploy your portlet:
DELETE FROM servicecomponent WHERE buildNamespace = '<your table namespace>';
I faced the same problem, and the solution below worked for me.
Steps:
I changed the build.number in service.properties to be higher than the one stored in servicecomponent.
Removed all the XML files from the META-INF folder.
Removed all the SQL files from the webapp/WEB-INF/sql folder.
Then build-service and deploy will create the custom tables that are defined in service.xml.
If you're using Service Builder in Liferay 6.2, when you define a new entity in the Liferay schema in service.xml, the table will be created the first time any portlet tries to access it, if it doesn't already exist.
There are several problems when you're using a different schema, because in that case Liferay sometimes does not create the tables automatically. Then you need to run the creation SQL statement yourself, create the table manually, and access it through LocalServiceUtil.