Create User on Oracle 12c (pluggable) - sql

I need to do Data Mining with Oracle on a Pluggable Oracle Database.
The users are created with a C## prefix.
When I try to install the Data Mining Repository via SQL Developer, the installation wants to create a user called "ODMRSYS" without the prefix and gives me an error.
Any advice on solving this?

You're not supposed to create the ODMRSYS user. The ODMRSYS user is created internally by SQL Developer during the data mining repository installation step. All the Data Mining repository templates and binaries are deployed into this user's schema. That's why you cannot create a user named ODMRSYS yourself.
However, you do need a new user which is going to be the Data Miner user. This is the user you must use for data mining and where your mined data is going to be collected. You first create this user, connect to it, and then install the Data Mining repositories (SQL Developer will create and configure the ODMRSYS user by itself).
AFAIK you don't need a common user for all your pluggable databases to deploy the data mining repositories.
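As a hedged illustration (the PDB name, user name, and password below are placeholders, and the exact privilege list the repository installer needs may vary), the Data Miner user can be created as an ordinary local user inside the pluggable database, where no C## prefix is required:

-- Connect to the PDB (either directly via its service name, or from the root as below).
ALTER SESSION SET CONTAINER = PDB1;

-- Local users inside a PDB do not need the C## prefix.
CREATE USER dmuser IDENTIFIED BY "StrongPassword123"
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;

-- Minimal privileges to get started; SQL Developer's repository installer
-- will prompt for and grant anything else it requires.
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW TO dmuser;

After that, connect to the PDB as that user in SQL Developer and start the repository installation from there.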
Maybe this guide can help you: Oracle Data Mining 17.2 OBE Series

Related

How to use Azure Synapse database templates programmatically

I can create an Azure data lake database with pre-built tables using Azure Synapse database templates from the Synapse Studio UI, but is there a way to use these templates programmatically? So far my research has not turned up a command, API, or SDK for this. Perhaps I could create the database and tables via the UI and then generate the associated Spark SQL creation scripts, but I don't see a way to do that either. Does anyone have any ideas on how to do either of these?
You can create the data lake storage, tables, and data insertion programmatically using the Azure SDKs, but these templates were made available precisely to remove that series of manual tasks. Using the templates saves you the time and effort of creating an environment and sample data for development.
Therefore, asking to deploy these templates programmatically somewhat defeats the purpose of the templates. If you want to create these resources yourself rather than through the templates, you can use the Azure SDKs.

How mysql repository works in Pentaho User console?

Following the Pentaho guide (https://help.pentaho.com/Documentation/8.2/Setup/Installation/Archive/MySQL_Repository), I successfully converted the Pentaho file-based repository to a MySQL database repository.
Now, does anyone know how the MySQL repository stores its data? That is, if I create a new folder, dashboard, or connection, how does Pentaho store that data in the MySQL database? I also need to know which tables are used for which kind of data.
The schemas and tables created by default for the MySQL Pentaho repository are attached.
Please provide any input or reference material on this.
Pentaho's repository comprises three third party technologies: Jackrabbit, Hibernate, and Quartz. Reports/Jobs/Transformations and any other artifacts stored inside the Pentaho Server are generally stored in Jackrabbit. Scheduling info and triggers are stored in Quartz. And diagnostic info is stored in Hibernate (such as who accessed what reports, how long a report took to run, etc.).
None of this info is designed to be human-readable directly out of the database tables; these are essentially "black box" third-party technologies that Pentaho simply leverages for its repository functions. If you have additional questions, I'd recommend checking out the technologies themselves on their project pages.
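As a hedged illustration (assuming the default schema names jackrabbit, quartz, and hibernate created by Pentaho's MySQL setup scripts; yours may differ), you can at least list which tables belong to each component:

SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema IN ('jackrabbit', 'quartz', 'hibernate')
ORDER BY table_schema, table_name;

The jackrabbit schema holds repository content (folders, reports, dashboards, connections), the quartz schema holds the QRTZ* scheduling tables, and the hibernate schema holds the audit/diagnostic data.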

What permissions are required on the source to copy a SQL Azure database?

I need to grant permissions to a remote development team so they can copy schema changes on a database to their local dev instances. I see many posts similar to this, but they seem to focus on what is required in the destination server, rather than rights to read everything necessary on the source.
Currently, the user is in the db_datareader role, and while they seem to be able to read a good portion of the table structure, configuration items such as defaults seem to be obscured, and stored procedure and view definitions don't seem to be available either.
I need the team to be able to copy from our Test/UAT instance, but I don't want them to be able to modify it. They should already have sa access to their local dev instances.
I need to grant permissions to a remote development team so they can copy schema changes on a database to their local dev instances.
I think you can use Azure SQL Database Data Sync.
Data Sync is useful in cases where data needs to be kept up-to-date across several Azure SQL databases or SQL Server databases. Here are the main use cases for Data Sync:
Hybrid Data Synchronization: With Data Sync, you can keep data synchronized between your on-premises databases and Azure SQL databases to enable hybrid applications. This capability may appeal to customers who are considering moving to the cloud and would like to put some of their application in Azure.
Distributed Applications: In many cases, it's beneficial to separate different workloads across different databases. For example, if you have a large production database, but you also need to run a reporting or analytics workload on this data, it's helpful to have a second database for this additional workload. This approach minimizes the performance impact on your production workload. You can use Data Sync to keep these two databases synchronized.
Globally Distributed Applications: Many businesses span several regions and even several countries/regions. To minimize network latency, it's best to have your data in a region close to you. With Data Sync, you can easily keep databases in regions around the world synchronized.
Data Sync is based around the concept of a Sync Group. A Sync Group is a group of databases that you want to synchronize.
A Sync Group has the following properties:
The Sync Schema describes which data is being synchronized.
The Sync Direction can be bi-directional or can flow in only one direction. That is, the Sync Direction can be Hub to Member, or Member to Hub, or both.
The Sync Interval describes how often synchronization occurs.
The Conflict Resolution Policy is a group level policy, which can be Hub wins or Member wins.
For more detail, please see Overview of SQL Data Sync.
With Data Sync, you can set your Azure SQL database as the hub database and the team's local dev instances as member databases, and set the Sync Direction to 'Hub to Member'.
Then you can sync the schema changes on the database to their local dev instances, either manually or automatically. Reference: Tutorial: Set up SQL Data Sync between Azure SQL Database and SQL Server on-premises
Hope this helps.
GRANT VIEW DEFINITION was what I needed.
Not sure how I didn't stumble on that in my searches, but there it is.
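For reference, a minimal sketch of that grant (the user name is a placeholder), run against the source database:

-- Lets the user read object metadata (view, stored procedure, and default definitions)
-- without granting any rights to modify data or schema.
GRANT VIEW DEFINITION TO [remote_dev_user];

VIEW DEFINITION can also be granted at a narrower scope, e.g. GRANT VIEW DEFINITION ON SCHEMA::dbo TO [remote_dev_user], if database-wide metadata access is more than you want to hand out.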

How to manage/track changes to a SQL Server database without a compare tool

I'm working on a project as an outsourced developer where I don't have access to the testing and production servers, only the development environment.
To deploy changes I have to create SQL scripts containing the changes to make on each server for the feature I wish to deploy.
Examples:
When I make a change to the database, I save the script to a folder, but sometimes this is not enough: for example, I sent a script to alter a view but forgot to include new tables that I had created for another feature.
Another situation would be changing a table via the SSMS GUI, forgetting to create a script with the changed or new columns, and later having to send a script to update the table in testing.
Since some features can be sent for testing and others straight to production (for example, queries to feed Excel files), it's hard to keep track of what I have to send to each environment.
Since the deployment team just executes the scripts I send them to update the database, how can I manage/keep track of changes to a SQL Server database without a compare tool?
[Edit]
The current tools that i use are SSMS, VS 2008 Professional and TFS 2008.
I can tell you how we at xSQL Software do this using our tools:
deployment team has an automated process that takes a schema snapshot of the staging and production databases and dumps the snapshots nightly on a share that the development team has access to.
every morning the developers have up-to-date schema snapshots of the production and staging databases available. They use our Schema Compare tool to compare the dev database with the staging/production snapshot and generate the change scripts.
Note: to take the schema snapshot you can either use the Schema Compare tool or our Schema Compare SDK.
I'd say you can keep structural copies of the test and production servers as additional development databases, and make a point of always applying a change to them whenever you send something.
On these databases you can set up triggers that capture all DDL events and write them into a table together with getdate(). With that you should be able to track changes pretty easily, and a simple compare will also be easier to apply; see the sketch below.
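A minimal sketch of such a DDL audit trigger (the table and trigger names are illustrative):

-- Table that will hold the captured DDL events.
CREATE TABLE dbo.DDLChangeLog (
    EventDate  DATETIME       NOT NULL DEFAULT GETDATE(),
    LoginName  NVARCHAR(128)  NOT NULL,
    EventType  NVARCHAR(100)  NOT NULL,
    ObjectName NVARCHAR(256)  NULL,
    SqlText    NVARCHAR(MAX)  NULL
);
GO

-- Database-scoped trigger that fires on every DDL statement and logs it.
CREATE TRIGGER trgCaptureDDL
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e XML;
    SET @e = EVENTDATA();
    INSERT INTO dbo.DDLChangeLog (LoginName, EventType, ObjectName, SqlText)
    VALUES (
        @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(128)'),
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)')
    );
END;
GO

Querying dbo.DDLChangeLog ordered by EventDate then gives you a chronological list of every schema change made on that copy of the database.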
Look into Liquibase, especially the SQL format, and see if that gives you what you want. I use it for our database and it's great.
You can store all your objects in separate scripts, but when you do a Liquibase "build" it will generate one SQL script with all your changes in it. The really important part is getting your Liquibase configuration to put the objects in the correct dependency order; that is, tables get created before foreign key constraints, for example.
http://www.liquibase.org/
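For illustration, a minimal formatted-SQL changelog (the author and table names here are placeholders); Liquibase parses the special comments to track which changesets have already been applied to each environment:

--liquibase formatted sql

--changeset jdoe:create-customer-table
CREATE TABLE Customer (
    CustomerId INT NOT NULL PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
);
--rollback DROP TABLE Customer;

--changeset jdoe:add-customer-email
ALTER TABLE Customer ADD Email NVARCHAR(255) NULL;
--rollback ALTER TABLE Customer DROP COLUMN Email;

Running liquibase update against the test or production connection then applies only the changesets that environment has not seen yet, which removes the guesswork about which scripts still need to be sent.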

Creating SQL Windows login for External Domain

Problem
Is it somehow possible to create a Windows Authentication login for a SQL database without performing a check for the user at creation time?
Example
Consider ServerA that exists in our DomainA, and ServerB that exists in the customer's DomainB. Being separate companies, DomainA and DomainB never share resources. But, if we backup from ServerB and restore to ServerA, we are able to see the existing SQL logins for users from DomainB, and even modify and code against these logins. This is good, because we are able to develop the database schema on ServerA and then publish to ServerB.
But, if I want to add a new user for this database, and am working on ServerA in DomainA, the following command produces an error:
CREATE USER [DomainB\User];
Windows NT user or group 'DomainB\User' not found. Check the name again. (Microsoft SQL Server, Error: 15401)
This is bad, because we're no longer able to develop on ServerA using the same schema as ServerB.
Backstory
I'm attempting to bring our database-driven application's database schema into source control using a Visual Studio 2010 Database Project. It's important to me to make this work well enough to convince the boss not to continue using 60-GB database backups in a zip file as a means of 'Version Control' (especially since this is just for schema, and not a backup routine). VS2010 DB Projects use scripting to create/modify databases, and so they can't create WinNT users for an unknown domain. In order to get the boss's buy-off, we're going to have to be able to match the capabilities of restoring a backup, and that means being able to re-create users for domains that we don't have access to.
Using SQL Server 2008 in my case.
Note - DBProjects are best suited to managing and versioning your SCHEMA, not your data.
If you want to keep rolling backups of your SQL databases as a whole, then I'd recommend a decent backup strategy.
If you want to better manage your databases' evolving schemas, then using DBProjects may well be your best bet.
FWIW, if you reverse-engineer a DB into a DBProj, you could then run a script to replace DomainB\known-user with DomainA\known-user prior to deploying within DomainA, no?
No, because SQL Server needs to know the Windows SID (ugly GUID) of the user at the time it's created.
Note that you can, however, create a SQL or Windows user with the same name and password as your remote SQL, machine, or domain user, and it will be able to log in.
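As a hedged sketch of that workaround (names and password are placeholders): on the development server, create a login with the same name the application or developer will use on the customer's side, then map it into the restored database:

-- SQL authentication variant: same login name and password as on the remote server.
CREATE LOGIN [AppUser] WITH PASSWORD = 'SamePasswordAsRemote!1';
CREATE USER [AppUser] FOR LOGIN [AppUser];

-- Windows variant: create a local machine or DomainA account with the matching
-- name/password, then register it (this one SQL Server can verify locally).
-- CREATE LOGIN [DomainA\User] FROM WINDOWS;

This does not recreate the DomainB principal (its SID cannot be resolved from DomainA), but it gives you a principal that can actually log in and exercise the same schema.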