I am new to UniVerse and working with an existing database. I would like to know how to view a list of all available TABLES in the database. Is there a simple command to view all TABLES?
Type "F" would only return results of physical files in that account. Type "Q" are Q-Pointers that point to physical files in other accounts. This is a way to have one file, but several accounts point to that file.
At TCL (the UniVerse command prompt):
LIST VOC WITH F1 = "F]"
This will list all file/table entries registered in a specific account (database) on the database server.
If you're looking for all accessible files, including files that may be in other accounts (databases) on the server, use the following query:
LIST VOC WITH F1 = "F]" OR F1 = "Q]"
USE THIS SYNTAX
a) ... THIS IS FOR THE CONTENTS
LIST (NAME OF FILE)
b) ... THIS IS FOR THE DICTIONARY
LIST DICT (NAME OF FILE)
Environment:
MS Azure:
Blob Container, multiple CSV files saved in a folder. This is my source.
Azure SQL Database. This is my target.
Goal:
Use Azure Data Factory to build a pipeline that "copies" all files from the container and stores them in their respective tables in the Azure SQL Database by automatically creating those tables.
How do I do that? I tried following this, but I just end up with tables incorrectly created in the database: each table is created with a single column whose name is the same as the table name.
I believe I followed the instructions from that link pretty much as they are.
My CSV file is as follows; one column contains the table name.
The previous steps will not be repeated; they are the same as in the link.
At Step 3, inside the ForEach activity, we should add a Lookup activity to query the table name from the source dataset.
We can declare a String-type variable tableName beforehand, then set its value via the expression @activity('Lookup1').output.firstRow.tableName.
At the sink setting of the Copy activity, we can key in @variables('tableName').
ADF will auto-create the table for us.
The debug result is as follows:
I googled for a solution to create a table using Databricks and Azure SQL Server, and load data into this same table. I found some sample code online, which seems pretty straightforward, but apparently there is an issue somewhere. Here is my code.
CREATE TABLE MyTable
USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:sqlserver://server_name_here.database.windows.net:1433;database = db_name_here",
user "u_name",
password "p_wd",
dbtable "MyTable"
);
Now, here is my error.
Error in SQL statement: SQLServerException: Invalid object name 'MyTable'.
My password, unfortunately, has spaces in it. That could be the problem, perhaps, but I don't think so.
Basically, I would like this to recursively loop through files in a folder and its sub-folders, and load data from files whose names match a pattern, like 'ABC*', all into one table. The blocker here is that I need the file name loaded into a field as well. So, I want to load data from MANY files into 4 fields of actual data and 1 field that captures the file name. The only way I can distinguish the different data sets is by the file name. Is this possible? Or is this an exercise in futility?
CREATE TABLE ... USING org.apache.spark.sql.jdbc points at a table that must already exist on the server — it doesn't create it — which is why you get 'Invalid object name'. My suggestion is to use the Azure SQL Spark library, as also mentioned in the documentation:
https://docs.databricks.com/spark/latest/data-sources/sql-databases-azure.html#connect-to-spark-using-this-library
The 'Bulk Copy' is what you want to use to get good performance. Just load your file into a DataFrame and bulk copy it to Azure SQL:
https://docs.databricks.com/data/data-sources/sql-databases-azure.html#bulk-copy-to-azure-sql-database-or-sql-server
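For reference, here is a minimal sketch of what that bulk copy looks like with the azure-sqldb-spark library, to the best of my recollection of its README. I'm reusing the placeholder server, database, and credentials from your question; the batch size, table lock, and timeout values are just illustrative settings to tune.

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Connection and bulk-copy settings; replace the placeholders with real values.
val bulkCopyConfig = Config(Map(
  "url"               -> "server_name_here.database.windows.net",
  "databaseName"      -> "db_name_here",
  "user"              -> "u_name",
  "password"          -> "p_wd",
  "dbTable"           -> "dbo.MyTable",
  "bulkCopyBatchSize" -> "2500",  // illustrative; tune for your workload
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))

// df stands for the DataFrame you loaded from your files.
df.bulkCopyToSqlDB(bulkCopyConfig)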
To read files from subfolders, the answer is here:
How to import multiple csv files in a single load?
I finally, finally, finally got this working.
val myDFCsv = spark.read.format("csv")
  .option("sep","|")            // pipe-delimited files
  .option("inferSchema","true") // let Spark work out the column types
  .option("header","false")     // the files have no header row
  .load("mnt/rawdata/2019/01/01/client/ABC*.gz") // the glob picks up every matching ABC* file

myDFCsv.show()
myDFCsv.count()
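In case anyone else needs the file-name field I asked about: Spark has a built-in input_file_name() function for this, so a sketch like the following should work (source_file is just my own choice of column name):

import org.apache.spark.sql.functions.input_file_name

// Add a column holding the full path of the file each row was read from.
val myDFCsvWithSource = myDFCsv.withColumn("source_file", input_file_name())
myDFCsvWithSource.show()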
Thanks for the point in the right direction, mauridb!!
I am using Azure Machine Learning to cluster data.
The input data is from an Azure SQL Database, and it works fine.
At the end of everything I want to write the output to a table in the same Azure SQL Database, but I get this error:
Error: Error 1000: AFx Library library exception:
Sql encountered an error: Login failed for user
Anyone any idea?
Thank you very much!
Please follow the instructions and examine the examples provided here to properly use the Export Data module to save the output of your ML experiment to an Azure SQL Database.
How to Export Data to an Azure SQL Database
Add the Export Data module to your experiment. You can find this module in the Data Input and Output group in the experiment items list in Azure Machine Learning Studio.
Connect it to the module that produces the data that you want to export to Azure SQL DB.
For Data destination, select Azure SQL Database. This option supports Azure SQL Data Warehouse as well.
Set the following options specific to Azure SQL Database or Azure SQL Data Warehouse.
Database server name
Type the server name that is generated by Azure. Typically it has the form <generated_identifier>.database.windows.net.
Database name
Type the name of a database on the server you just specified. The database must already exist; Export Data cannot create it.
Server user account name
Type the user name of an account that has access permissions for the database.
Server user account password
Provide the password for the specified user account.
Comma-separated list of columns to be saved
Type the names of the columns in the experiment that you want to write to the database.
Data table name
Type the name of the table where data will be stored.
For Azure SQL Database, if the table does not exist, it will be created. For Azure SQL Data Warehouse, the table must already exist and have the correct schema, so be sure to create it in advance.
Comma-separated list of datatable columns
Type the names of the columns as you wish them to appear in the destination table. The columns should correspond in order with the column names that you list in Comma-separated list of columns to be saved.
If you are writing to Azure SQL Data Warehouse, the column names must match those already in the destination table schema.
Number of rows written per SQL Azure operation
Indicate how many rows should be written to the destination table in each batch. By default, the value is set to 50, which is the default batch size for Azure SQL Database. However, you should increase this value if you have a large number of rows to write.
TIP:
For Azure SQL Data Warehouse, we recommend that you set this value to 1. If you use a larger batch size, the size of the command string that is sent to Azure SQL Data Warehouse can exceed the allowed string length, causing an error.
If you don't want to write new results each time you run the experiment, select the Use cached results option. If there are no other changes to module parameters, the experiment will write the data the first time the module is run, and thereafter not perform writes.
However, a write will always be performed if any parameters have been changed in Export Data that would change the results.
Run the experiment.
Found the issue!
I needed to create a specific user with this SQL code:
CREATE USER AMLApplicationUser WITH PASSWORD = '************';
and then add the user to these roles on the database I want to write to:
ALTER ROLE db_datareader ADD MEMBER AMLApplicationUser;
ALTER ROLE db_datawriter ADD MEMBER AMLApplicationUser;
I guess the db_datawriter role alone would be enough, but I needed db_datareader too.
So, in conclusion, it seems that from AML the database admin role can be used to read data, but not to write it.
Thank you for your help!
I've never touched PervasiveSql before, and now I have a bunch of .ddf and .btr files. I read that all I had to do was create a new database in the Control Center and point it to the folder that contains these files.
When I do this and look at the database, there is nothing in it. Since I am new to Pervasive, I'm fairly sure I'm doing something wrong.
EDIT: Added a screen shot after running command prompt
To create a database name in the PCC, you need to connect to the engine, then right-click the engine name and select New, then Database. Once you do that, the database creation dialog should be displayed.
Enter the database name and the path, the path being where the DDFs are located. In most cases the default options are sufficient.
A longer process is documented at http://docs.pervasive.com/products/database/psqlv11/wwhelp/wwhimpl/js/html/wwhelp.htm#href=uguide/using.02.5.html.
If you pointed to a directory that had DDF files (FILE.DDF, FIELD.DDF, and INDEX.DDF) when you created the database name, you should see tables listed.
If you pointed to a directory that does not have DDF files, the database will still be created but will have no tables defined. You'll either need to get DDFs from the vendor, create the table entries using CREATE TABLE (with IN DICTIONARY clauses), or use DDF Builder to add table entries.
Based on your screen shot, you only have 10 records in FILE.DDF. That is not enough; there is a minimum set of required system tables (X$FILE, X$FIELD, X$INDEX, and a few others). It appears your DDFs are not a valid set. Contact the client / vendor that provided the DDFs and ask for a set that includes all of the table definitions.
Once you have tables listed in your Database Name, you can use ODBC to access the data.
I load 10 tables from an Access 2007 file database. Is there a way I can get the column names into the DataSet, or do I have to rename each column? I am using Visual Studio 2008 with VB.NET.
I would like to reference the columns in the code by name and not have to use an index.
I just went ahead and added the following code for each of the tables:
Table1.Columns(0).ColumnName = "NAME"
Table1.Columns(1).ColumnName = "ADDRESS"
Table1.Columns(2).ColumnName = "CITY"
Table1.Columns(3).ColumnName = "STATE"
But I do this for 10 tables, so the lines of code add up fast. If there is a way for me to read the names from the file, I would like to replace all of these lines with one line of code.
When you create the dataset, if you provide a connection string locally, the generated .xsd file should automatically type out any tables you add to the set. As long as you're using the TableAdapters built into .NET, you can change the connection string of any TableAdapter you use from your local config when you deploy.
When you right-click in the blank space of the new dataset you create (in the designer), you can choose Add -> TableAdapter. It'll ask you for the connection string to the database (provide it), and then you can submit a query. Start with a general query for the basic table you're looking for. You can then add subsequent queries as you need them to the same table, or create custom tables (and names) based on joins/other queries.
EDIT: I'm assuming some version of .Net is being used here.