I am trying to insert a picture into the following table:
create table Picture
(
pic BLOB,
title varchar2(30),
descript varchar2(200),
tags varchar2(100),
date_created varchar2(100),
actualdate date
);
I have a picture and 5 varchar2 parameters. Here is the procedure where I want to insert:
create or replace procedure addKep (pic BLOB, title varchar2,descript varchar2, tags varchar2 , date_created varchar2, hiba out varchar2)
is
my_date date;
v_blob BLOB;
begin
--get actual date
SELECT TO_DATE(SYSDATE, 'YYYY/MM/DD HH:MI:SS') INTO my_date FROM DUAL;
INSERT INTO Picture (pic)
VALUES (empty_blob());
insert into Picture Values(pic,title,descript,tags,date_created,my_date);
--hiba:='Sikeres!';
commit;
end;
Then I try to test my procedure:
declare
something varchar2(20);
BEGIN
addKep('c:\xampp\htdocs\php_web\_projekt\pic\akosfeladatok.jpg','Title','Description','tags','2020-06-15',something);
END;
But I get the following error:
PLS-00306: wrong number or types of arguments in call to 'ADDKEP'
However, I have the same argument list.
Thank you for your help.
You don't pass a path to a file as a BLOB; you pass the actual bytes of the file - see "Using PL/SQL how do you get a file's contents in to a blob?"
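For example, a minimal sketch of loading the file's bytes first and then calling the procedure (PIC_DIR is a hypothetical Oracle directory object pointing at the picture folder, and the DB user needs read access on it):
declare
  v_bfile  BFILE := BFILENAME('PIC_DIR', 'akosfeladatok.jpg');
  v_blob   BLOB;
  v_result VARCHAR2(200);
begin
  -- read the file's bytes into a temporary BLOB
  DBMS_LOB.CREATETEMPORARY(v_blob, TRUE);
  DBMS_LOB.FILEOPEN(v_bfile, DBMS_LOB.FILE_READONLY);
  DBMS_LOB.LOADFROMFILE(v_blob, v_bfile, DBMS_LOB.GETLENGTH(v_bfile));
  DBMS_LOB.FILECLOSE(v_bfile);
  -- now the first argument really is a BLOB, so it matches the procedure's signature
  addKep(v_blob, 'Title', 'Description', 'tags', '2020-06-15', v_result);
end;
/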
Though, personally, I'd say your problematic code has the right idea: I'd recommend NOT storing your pictures inside your database. Store them in the file system as files and keep the paths in the DB. That is handy for all sorts of things, such as being able to individually back them up, manipulate or substitute them, and you don't need to write code in order to serve them over the web; web servers already know how to serve files out of the file system. As soon as you put your picture (or any byte data) into a DB, it becomes much harder to work with: you have to write all the code that pulls it out and puts it back, and that's really all you can do. Storing files in your mission-critical DB also means the DB now has to dedicate resources to fetching those files out any time they're needed. Really, files like pictures should be put on a CDN and made available close to the users who will use them, handing the job of storing, caching and serving them off to existing technologies dedicated to the task.
PS: there are a lot of reasonable arguments for and against in https://dba.stackexchange.com/questions/2445/should-binary-files-be-stored-in-the-database - my thoughts, personally, align with Tek's.
Related
I want to know if I can create a PL/SQL procedure whose number of parameters and their types can change.
For example, procedure p1.
I can use it like this:
p1 (param1, param2,......., param n);
I want to pass a table name and data to the procedure, but the attributes change for every table:
create or replace PROCEDURE INSERTDATA(NOMT in varchar2) is
num int;
BEGIN
EXECUTE IMMEDIATE 'SELECT count(*) FROM user_tables WHERE table_name = :1'
into num using NOMT ;
IF( num < 1 )
THEN
dbms_output.put_line('table not exist !!! ');
ELSE
dbms_output.put_line('');
-- here i want to insert parameters in the table,
-- but the table attributes are not the same !!
END IF;
NULL;
END INSERTDATA;
As far as I can tell, no, you cannot. The number and datatypes of all parameters must be fixed.
You could pass a collection as a parameter (and put a varying number of values into it), but that's still a single parameter.
Where would you want to use such a procedure?
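For the collection idea mentioned above, a minimal sketch (the type and procedure names are just examples):
create or replace type t_values as table of varchar2(4000);
/
create or replace procedure p1 (p_vals in t_values) is
begin
  -- the caller decides how many values to pass
  for i in 1 .. p_vals.count loop
    dbms_output.put_line('param ' || i || ' = ' || p_vals(i));
  end loop;
end;
/
begin
  p1(t_values('param1', 'param2', 'param3'));
end;
/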
If you need to store, update and query a variable amount of information, may I recommend switching to JSON queries and objects in Oracle. Oracle has deep support for both fixed and dynamic querying of JSON data, both in SQL and PL/SQL.
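As a rough sketch of that idea (assuming Oracle 12.2 or later, where the JSON_OBJECT_T PL/SQL type is available; the procedure name and payload are illustrative):
create or replace procedure p1 (p_payload in clob) is
  j json_object_t := json_object_t.parse(p_payload);
  k json_key_list;
begin
  -- whatever keys the caller chose to supply are visible here
  k := j.get_keys;
  for i in 1 .. k.count loop
    dbms_output.put_line(k(i) || ' = ' || j.get_string(k(i)));
  end loop;
end;
/
begin
  p1('{"table":"EMP","ename":"KING","job":"PRESIDENT"}');
end;
/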
I want to pass a table name and data to the procedure, but the attributes change for every table.
The problem with such a universal procedure is that something needs to know the structure of the target table. Your approach demands that the caller discover the projection of the table and arrange the parameters correctly.
In no particular order:
This is bad practice because it requires the calling program to do the hard work regarding the data dictionary.
Furthermore, it breaks the Law of Demeter because the calling program needs to understand things like primary keys (sequences, identity columns, etc.), foreign key lookups, etc.
This approach mandates that all columns be populated; it makes no allowance for virtual columns, optional columns, etc.
To work, the procedure would have to use dynamic SQL, which is always hard work because it turns compilation errors into runtime errors, and it should be avoided if at all possible.
It is trivially simple to generate a dedicated insert procedure for each table in a schema, using dynamic SQL against the data dictionary. This is the concept of the Table API. It's not without its own issues but it is much safer than what your question proposes.
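For instance, a rough sketch of generating such a dedicated procedure from the data dictionary (the target table PICTURE, the generated name picture_ins and the p_<column> parameter convention are assumptions for illustration):
declare
  v_args varchar2(4000);
  v_cols varchar2(4000);
  v_vals varchar2(4000);
begin
  for c in (select column_name, data_type
              from user_tab_columns
             where table_name = 'PICTURE'
             order by column_id) loop
    v_args := v_args || case when v_args is not null then ', ' end
                     || 'p_' || lower(c.column_name) || ' in ' || c.data_type;
    v_cols := v_cols || case when v_cols is not null then ', ' end || c.column_name;
    v_vals := v_vals || case when v_vals is not null then ', ' end
                     || 'p_' || lower(c.column_name);
  end loop;
  -- one statically-typed insert procedure per table
  execute immediate
    'create or replace procedure picture_ins (' || v_args || ') is begin ' ||
    'insert into picture (' || v_cols || ') values (' || v_vals || '); end;';
end;
/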
I have a tool that applies a lot of changes to a database. Many changes concern modifying column types, sizes, etc. Is there any (possibly Oracle-specific) way to tell in advance whether a given ALTER TABLE change will succeed and not fail because of too-long values, function-based indexes and so on?
With non-DDL modifications this is simple: start a transaction, execute your changes and rollback. The answer is known from whether you get an exception or not. However, DDL modifications cannot be part of transactions, so I cannot follow the same procedure here.
Is there any (possibly Oracle-specific) way to tell in advance whether a given ALTER TABLE change will succeed and not fail because of too-long values
I would say it is not a good design when you need to create/modify database objects on the fly. Having said that, if the DDL fails, an ORA- error will be raised and you need to retry with the required changes. Modifying a table is not a regular thing: you create a table once and then alter it only when there is a business need, going through a release so that the application is not affected. So I wonder how it would help you to know prior to execution whether the DDL would be successful or not. If your tool is doing these modifications, then your tool should handle it programmatically: check the type and size of the columns before altering them.
If you are doing it using an external script, then you need to build your own logic. You could use metadata views like user_tab_columns to check data_type, data_length, data_precision, data_scale, etc.
A small example of the logic to check the size of a VARCHAR2 column before issuing an ALTER statement (for demonstration purposes I am doing this in PL/SQL; you could apply similar logic in your script or tool):
SQL> CREATE TABLE t (A VARCHAR2(10));
Table created.
SQL> DESC t;
Name Null? Type
----------------------------------------- -------- ----------------------------
A VARCHAR2(10)
SQL> SET serveroutput ON
SQL> DECLARE
  v_type   VARCHAR2(20);
  v_size   NUMBER;
  new_size NUMBER;
BEGIN
  new_size := 20;
  SELECT data_type, data_length
    INTO v_type, v_size
    FROM user_tab_columns
   WHERE table_name = 'T';
  IF v_type = 'VARCHAR2' THEN
    IF new_size > v_size THEN
      EXECUTE IMMEDIATE 'ALTER TABLE T MODIFY A '||v_type||'('||new_size||')';
      DBMS_OUTPUT.PUT_LINE('Table altered successfully');
    ELSE
      DBMS_OUTPUT.PUT_LINE('New size should be greater than existing data size');
    END IF;
  END IF;
END;
/
Table altered successfully
PL/SQL procedure successfully completed.
OK, so the table is successfully altered; let's check:
SQL> DESC t;
Name Null? Type
----------------------------------------- -------- ----------------------------
A VARCHAR2(20)
SQL>
I have seen a few applications using a Groovy script which does all the checks and prepares the ALTER statements based on checks of data_type, data_length, data_precision, data_scale, etc.
For different checks, you need to add more IF-ELSE blocks. This was one example, increasing the size of a VARCHAR2 column. You would need to raise an exception when decreasing the column size, depending on whether the column has any existing data or not... and so on.
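For instance, a minimal sketch of such a shrink check against the demo table T above (the target size of 5 is arbitrary, and the error message is only illustrative):
DECLARE
  new_size   NUMBER := 5;
  v_max_used NUMBER;
BEGIN
  -- longest value currently stored in T.A
  SELECT NVL(MAX(LENGTH(a)), 0) INTO v_max_used FROM t;
  IF new_size < v_max_used THEN
    RAISE_APPLICATION_ERROR(-20001, 'Existing data is longer than ' || new_size);
  ELSE
    EXECUTE IMMEDIATE 'ALTER TABLE t MODIFY a VARCHAR2(' || new_size || ')';
  END IF;
END;
/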
You could create separate functions to check the metadata and return a value.
For example,
Numeric types:
CREATE OR REPLACE FUNCTION is_numeric (i_col_name)...
<using the above logic>
IF v_type ='NUMBER' THEN
<do something>
RETURN 1;
Character types:
CREATE OR REPLACE FUNCTION is_string (i_col_name)...
<using the above logic>
IF v_type ='VARCHAR2' THEN
<do something>
RETURN 1;
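A runnable version of the character-type check might look roughly like this (the two-argument signature and the 1/0 return convention are only one possible choice):
CREATE OR REPLACE FUNCTION is_string (i_tab_name VARCHAR2, i_col_name VARCHAR2)
  RETURN NUMBER
IS
  v_type VARCHAR2(128);
BEGIN
  SELECT data_type
    INTO v_type
    FROM user_tab_columns
   WHERE table_name = i_tab_name
     AND column_name = i_col_name;
  RETURN CASE WHEN v_type = 'VARCHAR2' THEN 1 ELSE 0 END;
END;
/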
Two approaches come to mind, neither of which really gives you exactly what you want.
The first, which I mention purely to describe what it is you actually desire and not because it is practical, is to write a tool that parses your scripted SQL changes and applies the same rules to the objects as Oracle does - i.e. for an ALTER TABLE ... MODIFY column, check that no column values exceed the new length. This is a huge undertaking, and when you consider that changes will be cascaded/compounded, you need to cater for that too. I wouldn't expect it to be quick either: if you do a MODIFY on a non-indexed column of a table with x million rows, the tool would need to scan for data that would cause the ALTER to fail. Whatever internal magic Oracle uses to determine this is not going to be available to such a tool.
The approach that I use, again not exactly what you want, is to clone a database from production, with cut down data. I mostly do this via scripting so that I have control, and do not rely on special permissions/dba access. I then test my deployment scripts against this, and do this iteratively until I have a clean build. I use a deployment framework I built that has restart functionality, so that if a deployment fails on step 63 of 121, it gives me a retry/skip/abort option, and if I abort it can restart from the failed step. Once I am happy with my dev build, I then test on a database that is sync'd with production - this tends to iron out problems with data and/or performance.
Now, another possible way for you might be to look at Flashback. I am not sure whether Flashback handles DDL as well, but if it does, and assuming it is enabled on your dev/test database (a big if), then that might be an avenue worth exploring.
Try my tool CORT - www.softcraftltd.co.uk/cort
It is free and open-source. Maybe you will find what you need there.
I have about 500 Linux scripts. I am trying to insert the source code from each script into an Oracle table:
CREATE TABLE "SCRIPT_CODE" (
"SCRIPT_NAME" VARCHAR2(200 BYTE),
"CODE_LINE" NUMBER(*,0),
"CODE_TEXT" VARCHAR2(2000 BYTE)
)
I was using a (painful) manual Excel solution: opening each script and pasting the code into a column. I ran into difficulties and switched gears.
I decided to change the table and place the entire source code of each script into a CLOB field:
CREATE TABLE "SCRIPT_CODE_CLOB" (
"SCRIPT_NAME" VARCHAR2(200 BYTE),
"CODE_TEXT" CLOB
)
Here is the Insert code that I wrote:
set define off;
Declare
  Code     Clob;
  Script   Varchar2(100);
  sql_exec varchar2(1000);
Begin
  Script := 'Some Script Name';

  Code := '
[pasted code here]
';

  sql_exec := 'INSERT INTO SCRIPT_CODE_CLOB VALUES (:1, :2)';
  EXECUTE IMMEDIATE sql_exec USING Script, Code;
  COMMIT;
End;
This was going great until I ran into a script that had 1,700 lines of code. When I pasted all the code in and ran the script, it gave me:
ORA-01704: string literal too long
I am looking for a better way of doing this. Is it possible to Import the files somehow and automate the process?
There are some external tables in the Oracle database, and I can get to the folder location that they point to.
Thanks very much for any assistance.
- Steve
Environment:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
Oracle SQL Developer Version 4.0.2.15, Build 15.21
In order to insert into the CLOB, you need to use the DBMS_LOB functions (specifically DBMS_LOB.WRITE) rather than reading the whole text into a variable and passing that directly into your insert statement. Check out the documentation for the package. You'll need to read the data into a buffer or a temporary LOB and then use that in the insert.
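As a rough sketch of that approach against the SCRIPT_CODE_CLOB table above, using DBMS_LOB.WRITEAPPEND (a close relative of DBMS_LOB.WRITE); the chunk literal is a placeholder, and each chunk must stay under the 32K PL/SQL VARCHAR2 limit:
DECLARE
  v_clob  CLOB;
  v_chunk VARCHAR2(32767);
BEGIN
  -- create the row with an empty CLOB and grab its locator
  INSERT INTO SCRIPT_CODE_CLOB (SCRIPT_NAME, CODE_TEXT)
  VALUES ('Some Script Name', EMPTY_CLOB())
  RETURNING CODE_TEXT INTO v_clob;

  -- append the script text piece by piece; repeat for each further chunk
  v_chunk := '... first block of script text, under 32K characters ...';
  DBMS_LOB.WRITEAPPEND(v_clob, LENGTH(v_chunk), v_chunk);

  COMMIT;
END;
/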
Oracle 11gR2 (x86 Windows):
I have a db with 250 tables with indexes and constraints. I need to re-create these tables, indexes and constraints in a new db and load the data. I need to know how to do the following in SQL Plus and/or SQL Developer, unless there's a magical utility that can automate all of this. Thanks in advance!
Unload (export) all the data from the 250 tables.
Create an sql script file containing the CREATE TABLE statements for the 250 tables.
Create an sql script file containing the CREATE INDEX statements for the 250 tables.
Create an sql script file containing the ALTER TABLE ADD CONSTRAINT statements for the 250 tables.
Run the script to create the tables in a new db.
Load the exported data into the tables in the new db.
Run the script to create all the indexes.
Run the script to add all the constraints.
EDIT: I'm connected to the remote desktop which links to the source db on a Windows Server 2008. The remote only has an Oracle client installed. For security reasons, I'm not allowed to link directly from my local computer to the Win Server, so can I dump the whole source db to the remote then zip it to my local target machine? I'm trying to replicate the entire db on my computer.
Starting from Oracle 10g, you could use the Data Pump command-line clients expdp and impdp to export/import data and/or schema from one DB to another. As a matter of fact, those two command-line utilities are only wrappers that "use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line." (quoted from Oracle's documentation)
Given your needs, you will have to create a directory object, then generate a full dump of your database using expdp:
SQL> CREATE OR REPLACE DIRECTORY dump_dir AS '/path/to/dump/folder/';
sh$ expdp system@db10g full=Y directory=DUMP_DIR dumpfile=db.dmp logfile=db.log
As the dump is written in a binary format, you will have to use the corresponding import utility to (re)import your DB, basically replacing expdp with impdp in the above command:
sh$ impdp system@db10g full=Y directory=DUMP_DIR dumpfile=db.dmp logfile=db.log
For a simple table dump, use this version instead:
sh$ expdp sylvain@db10g tables=DEPT,EMP directory=DUMP_DIR dumpfile=db.dmp logfile=db.log
As you noticed, you can use it with your standard user account, provided you have access to the given directory (GRANT READ, WRITE ON DIRECTORY dump_dir TO sylvain;).
For detailed usage explanations, see
http://www.oracle-base.com/articles/10g/oracle-data-pump-10g.php.
If you can create a database link from your local database to the one that currently contains the data, you can use the DBMS_DATAPUMP package to copy the entire schema. This is an interface to Data Pump (as @Sylvain Leroux mentioned) that is callable from within the database.
DECLARE
dph NUMBER;
source_schema VARCHAR2 (30) := 'SCHEMA_TO_EXPORT';
target_schema VARCHAR2 (30) := 'SCHEMA_TO_IMPORT';
job_name VARCHAR2 (30) := UPPER ('IMPORT_' || target_schema);
p_parallel NUMBER := 3;
v_start TIMESTAMP := SYSTIMESTAMP;
v_state VARCHAR2 (30);
BEGIN
dph :=
DBMS_DATAPUMP.open ('IMPORT',
'SCHEMA',
'DB_LINK_NAME',
job_name);
DBMS_OUTPUT.put_line ('dph = ' || dph);
DBMS_DATAPUMP.metadata_filter (dph,
'SCHEMA_LIST',
'''' || source_schema || '''');
DBMS_DATAPUMP.metadata_remap (dph,
'REMAP_SCHEMA',
source_schema,
target_schema);
DBMS_DATAPUMP.set_parameter (dph, 'TABLE_EXISTS_ACTION', 'REPLACE');
DBMS_DATAPUMP.set_parallel (dph, p_parallel);
DBMS_DATAPUMP.start_job (dph);
DBMS_DATAPUMP.wait_for_job (dph, v_state);
DBMS_OUTPUT.put_line ('Export/Import time: ' || (SYSTIMESTAMP - v_start));
DBMS_OUTPUT.put_line ('Final state: ' || v_state);
END;
The script above actually copies and renames the schema. If you want to keep the same schema name, I believe you'd just remove the metadata_remap call.
SQL Developer can help with #1 by creating INSERT statements with a formatted query result:
Select /*insert*/ *
from My_Table;
I'm interested in whether it is possible, with a PL/SQL block, to transfer the contents of an Oracle table into a text file on the hard drive. I need a PL/SQL block which can write the contents of a table (which will be used to store log data) to a text file.
Regards
You can use the UTL_FILE package for this.
You can try a block of the type below:
declare
  p_file      utl_file.file_type;
  l_delimited varchar2(1) := '|';
begin
  -- the first argument of fopen is an Oracle directory object name
  p_file := utl_file.fopen('<directory_name>','<file_name>','W');
  -- the loop variable l_table is declared implicitly by the cursor FOR loop
  for l_table in (select * from <your_table_name>) loop
    utl_file.put_line(p_file, l_table.col1||l_delimited||l_table.col2||l_delimited||l_table.col3||l_delimited||l_table.col4||l_delimited <continue with column list .........>);
  end loop;
  utl_file.fclose_all();
end;
pratik garg's answer is a good one.
But you might also want to consider the use of an EXTERNAL TABLE.
Basically, it's a table which is mapped to a file, so every row inserted into the table is automatically written to a file.
You can see an example here.
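If you go the external-table route, one rough sketch of unloading a table is a CREATE TABLE ... ORGANIZATION EXTERNAL using the ORACLE_DATAPUMP driver (the directory object log_dir, the file name and the source table log_data are assumptions here; note the resulting file is a Data Pump binary file rather than plain text):
CREATE TABLE log_data_unload
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY log_dir
    LOCATION ('log_data.dmp')
  )
AS SELECT * FROM log_data;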