I have a procedure that runs queries on a few tables and manipulates the output into a CLOB that it returns. I need to call this procedure on a remote database over a dblink and get the CLOB value that the procedure returns. I know that we cannot access non-scalar data like CLOBs over a dblink. I also know that if the CLOB were stored in a table on the remote side, I could create a global temp table on the local side and do an insert into my local temp table with a select over the remote table. But in my case, the CLOB is the manipulated output of a procedure.
Any suggestions on how I can do this?
On the remote database, create a function to wrap around the procedure and return the CLOB as its return value. Then create a view that selects from this function and exposes the CLOB as a column. You should be able to query that CLOB column through the view remotely over a database link. I know this can work as I pull CLOB data over dblinks thousands of times a day in utilities I wrote, though I do remember it taking a bit of trial-and-error to make it happy.
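A minimal sketch of that approach, using placeholder names (my_report_proc for your existing procedure, remote_db for the link, my_local_gtt for a local global temp table):

-- On the remote database:
create or replace function get_report_clob return clob is
  l_result clob;
begin
  my_report_proc(l_result);  -- the existing procedure that builds the CLOB
  return l_result;
end;
/
create or replace view v_report_clob as
  select get_report_clob as report_clob from dual;

-- On the local database, pull the CLOB into a local global temp table over the link:
insert into my_local_gtt (report_clob)
  select report_clob from v_report_clob@remote_db;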
If you cannot get that to work, there are a number of other workarounds available. One involves a remote package that declares a collection type; a function in that package disassembles the CLOB into a collection of varchar2(32767) records and returns that collection to the calling database, which can then reference the remote package's types over the @dblink and reassemble a local CLOB from the collection contents. But this kind of heavy-handed workaround really shouldn't be necessary.
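For what it's worth, a rough sketch of that package-based workaround (all names are illustrative, and the remote-type reference syntax can be version-sensitive):

-- On the remote database:
create or replace package clob_xfer as
  type t_chunks is table of varchar2(32767) index by pls_integer;
  function get_chunks return t_chunks;
end clob_xfer;
/
create or replace package body clob_xfer as
  function get_chunks return t_chunks is
    l_clob   clob;
    l_chunks t_chunks;
    l_pos    pls_integer := 1;
  begin
    my_report_proc(l_clob);  -- the procedure that builds the CLOB
    while l_pos <= dbms_lob.getlength(l_clob) loop
      l_chunks(l_chunks.count + 1) := dbms_lob.substr(l_clob, 32767, l_pos);
      l_pos := l_pos + 32767;
    end loop;
    return l_chunks;
  end;
end clob_xfer;
/

-- On the local database, reference the remote type and reassemble the CLOB:
declare
  l_chunks clob_xfer.t_chunks@remote_db;
  l_clob   clob;
begin
  l_chunks := clob_xfer.get_chunks@remote_db;
  dbms_lob.createtemporary(l_clob, true);
  for i in 1 .. l_chunks.count loop
    dbms_lob.writeappend(l_clob, length(l_chunks(i)), l_chunks(i));
  end loop;
end;
/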
Lastly, I should at least mention that using CLOBs for structured data is not a good design choice. CLOBs should hold only unstructured data, the kind that is meaningful only to humans (log files, free-form notes, user-entered descriptions, etc.). They should never be used to combine multiple pieces of meaningful structured data that a program is meant to interpret and work with. There are many other constructs that would handle that better than a CLOB.
Another option is to split the CLOB into varchar2(4000) chunks and store them in a global temporary table (on commit preserve rows) on the remote side, so that over the DB link you only select from a table containing the chunks of the CLOB plus a column that indicates their order. That means creating a procedure in the remote DB which calls the procedure that generates the CLOB, splits the CLOB into chunks, and inserts them into the global temporary table.
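A sketch of that idea, with illustrative names (build_report for the CLOB-producing procedure, clob_chunks for the temp table, remote_db for the link):

-- On the remote database:
create global temporary table clob_chunks (
  seq_no number,
  chunk  varchar2(4000)
) on commit preserve rows;

create or replace procedure fill_clob_chunks is
  l_clob clob;
  l_pos  pls_integer := 1;
  l_seq  pls_integer := 0;
begin
  build_report(l_clob);     -- the procedure that generates the CLOB
  delete from clob_chunks;  -- clear chunks left over from a previous run
  while l_pos <= dbms_lob.getlength(l_clob) loop
    l_seq := l_seq + 1;
    insert into clob_chunks (seq_no, chunk)
      values (l_seq, dbms_lob.substr(l_clob, 4000, l_pos));
    l_pos := l_pos + 4000;
  end loop;
end;
/

-- From the local database (same dblink session for both calls):
-- exec fill_clob_chunks@remote_db;
-- select chunk from clob_chunks@remote_db order by seq_no;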
Related
I am trying to copy tables from one schema to another within the same Azure SQL DB. So far, I have created a lookup pipeline and passed the parameters for the ForEach loop and Copy activity. But my sink dataset is not taking the parameter value I have given under the "table option" field; instead it is taking the dummy table I chose when creating the sink dataset. Can someone tell me how I can pass a dynamic table name to a sink dataset?
I have given concat('dest_schema.STG_',#{item().table_name})} in the table option field.
To make the schema and table names dynamic, add Parameters to the Dataset:
Most important - do NOT import a schema. If you already have one defined in the Dataset, clear it. For this Dataset to be dynamic, you don't want improper schemas interfering with the process.
In the Copy activity, provide the values at runtime. These can be hardcoded values, variables, parameters, or expressions, so it's very flexible.
If it's the same database, you can even use the same Dataset for both, just provide different values for the Source and Sink.
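For example, assuming you add two dataset parameters named SchemaName and TableName (illustrative names), the dynamic content would look something like this:

In the Sink dataset, set the table from the parameters:
  Schema: @dataset().SchemaName
  Table:  @dataset().TableName

In the Copy activity sink, supply the values, e.g.:
  SchemaName: dest_schema
  TableName:  @concat('STG_', item().table_name)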
WARNING: If you use the "Auto-create table" option, the schema for the new table will define any character field as varchar(8000), which can cause serious performance problems.
MY OPINION:
While you can do this, one of my personal rules is to not cross the database boundary. If the Source and Sink are on the same SQL database, I would try to solve this problem with a Stored Procedure rather than a data factory.
I want to connect a Node Express API to an Oracle 11g database which has a table with a BLOB column. I want to read it using a SQL query, but the problem is that the BLOB column can contain very long text, more than 100k characters. How can I do this?
I tried using: select utl_raw.cast_to_varchar2(dbms_lob.substr(COLUMN_NAME)) from TABLE_NAME.
But it returns 'raw variable length too long'.
I could make multiple queries in a loop and then join the results if necessary, but I haven't found how to fetch just a part of the BLOB.
Use the node-oracledb module to access Oracle Database (which you are probably already doing, but didn't mention).
By default, node-oracledb will return LOBs as Lob instances that you can stream from. Alternatively you can fetch the data directly as a String or Buffer, which is useful for 'small' LOBs. For 100K, I would just get the data as a Buffer, which you can do by setting:
oracledb.fetchAsBuffer = [ oracledb.BLOB ];
Review the Working with CLOB, NCLOB and BLOB Data documentation, and examples like blobhttp.js and the other lob*.js files in the examples directory.
You may also want to look at https://jsao.io/2018/03/creating-a-rest-api-with-node-js-and-oracle-database/ which shows Express and node-oracledb.
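A minimal sketch with fetchAsBuffer (the connection details, table, and column names are placeholders):

const oracledb = require('oracledb');

// Return BLOB columns as Node.js Buffers instead of Lob streams.
oracledb.fetchAsBuffer = [ oracledb.BLOB ];

async function readBlob(id) {
  const connection = await oracledb.getConnection({
    user: 'scott', password: 'tiger', connectString: 'localhost/XEPDB1'
  });
  try {
    const result = await connection.execute(
      'select COLUMN_NAME from TABLE_NAME where ID = :id', [id]
    );
    const buf = result.rows[0][0];   // the whole BLOB as a Buffer
    return buf.toString('utf8');     // convert to text if the BLOB holds text
  } finally {
    await connection.close();
  }
}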
I have a table to be fetched. One of the columns in this particular table contains HTML data stored as a CLOB, with a length of 1048576.
In my ETL job, I have replaced the CLOB with a LongVarChar of the same size (1048576), since CLOB is not available in DataStage, but the job is not working (there is no error, but it stays in the running state for a long time without moving a single row).
Can anyone recommend a solution for this issue? Thanks!
I have huge create-table queries (hundreds of GB) which I'd like to ship through ODBC to my DB (Postgres in this case). The problem is that these queries are built by an external program, so I would like to avoid loading each query into memory just to ship it over ODBC to the DB. I would much prefer to tell the DB, in a (small) query, to go execute that huge query directly.
That would be easy with psql, but I'd like to do it through ODBC. Is that possible?
If you mean bulk data load, PostgreSQL has the COPY command. It can read a data file on the server directly, but it cannot process regular SQL queries; it can only load data from a file in CSV or a similar format (which you can customize via the COPY parameters).
If you're loading the table from scratch, nice optimizations are having a plain table (without PK, FK, constraints, or indexes) and executing the COPY inside a transaction together with a TRUNCATE of the table, like:
BEGIN;
TRUNCATE ....;
COPY ...;
COMMIT;
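For example (the table name, columns, and file path are just placeholders; the file must be readable by the server process):

BEGIN;
TRUNCATE my_table;
COPY my_table (id, name, price) FROM '/var/lib/postgresql/import/data.csv' WITH (FORMAT csv, HEADER true);
COMMIT;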
I've been writing a library management Java app lately, and up until now the main library database has been stored in a .txt file, which is loaded into an ArrayList in Java for creating and editing the database, with the changes saved back to the .txt file again. A very primitive method indeed. Hence, having heard about SQL, I'm considering porting my preexisting .txt database to MySQL. I have absolutely no idea how SQL, and specifically MySQL, works, except for the fact that it can interact with Java code. Can you suggest any books/websites to visit/buy? Will the book Head First SQL help, especially when using Java code to interact with the SQL database? It should be mentioned that I'm already comfortable with using 3rd-party APIs.
View from 30,000 feet:
First, you'll need to figure out how to represent the text file data using appropriate SQL tables and fields. Here is a good overview of the different SQL data types. If each line of your data represents a single library record, then you'll only need to create one table. This is definitely the simplest approach, as the conversion can work line by line. If the records contain a lot of duplicated data, the more appropriate approach is to create multiple tables so that your database doesn't duplicate data; you would then link these tables together using IDs.
When you've decided how to split up the data, you create a MySQL database, and within that database, you create the tables (a database is just something that holds multiple tables). Connecting to your MySQL server with the console and creating a database and tables is described in this MySQL tutorial.
Once you've got the database created, you'll need to write the code to access the database. The link from OMG Ponies shows how to use JDBC in the simplest way to connect to your database. You then use that connection to create a Statement object and execute queries to insert, update, select, or delete data. If you're selecting data, you get a ResultSet back and can read the data from it. Here's a tutorial on using JDBC to select data and work with the ResultSet.
Your first code should probably be a Java utility that reads the text file and inserts all the data into the database. Once you have the data in place, you'll be able to update the main program to read from the database instead of the file.
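A stripped-down sketch of such a utility, assuming a books table with title and author columns and a tab-separated text file (adjust the names and parsing to your actual data):

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TxtToMySql {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/library", "user", "password");
             BufferedReader in = new BufferedReader(new FileReader("library.txt"));
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO books (title, author) VALUES (?, ?)")) {

            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split("\t");  // one record per line
                ps.setString(1, fields[0]);
                ps.setString(2, fields[1]);
                ps.executeUpdate();
            }
        }
    }
}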
Know that the connection between a program and a SQL database goes through a connection library (in Java's case, JDBC). You write an instruction as an SQL statement, say
Select * from Customer order by name;
and then set up to retrieve data one record at a time. Or in the other direction, you write
Insert into Customer (name, addr, ...) values (x, y, ...);
and either replace x, y, ... with actual values or bind them to the connection according to the interface.
With this understanding you should be able to read pretty much any book or JDBC API description and get started.