I tried to create a file using the Oracle UTL_FILE.FOPEN function, but I get this error.
Is there an alternative?
log_file:=UTL_FILE.FOPEN("P:\Documentation\Project Team\SA\CSR_Documentation\SHR-10500",'outputforDeleteStudentGroup.txt', 'W');
I don't want to use CREATE OR REPLACE DIRECTORY because the directory already exists.
Please suggest.
Thanks,
Sriram
CREATE OR REPLACE DIRECTORY does not create a directory at the operating system level; it creates an Oracle directory object. You want to create an Oracle directory object that points to a directory that already exists at the operating system level. You could potentially go with the very old-school approach of modifying the UTL_FILE_DIR initialization parameter to include P:\Documentation\Project Team\SA\CSR_Documentation\SHR-10500 and restarting the database, but that is not something I would recommend.
The first parameter to UTL_FILE.FOPEN should be a string (the name of the directory object), which means it should be enclosed in single quotes, not double quotes.
Is the P:\ drive something that exists on the database server's file system? Or is it a network drive that gets mounted by the database server? Or is it a directory that is available to the client?
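For what it's worth, here is a minimal sketch of the directory-object approach, assuming the path is visible to the database server; the object name (csr_doc_dir) and grantee (your_schema) are hypothetical. Note that CREATE OR REPLACE DIRECTORY only registers the existing operating-system directory with Oracle; it does not try to create it on disk, so it is safe to run even though the directory already exists.
-- Run once, as a user with the CREATE ANY DIRECTORY privilege (or ask a DBA)
CREATE OR REPLACE DIRECTORY csr_doc_dir AS 'P:\Documentation\Project Team\SA\CSR_Documentation\SHR-10500';
GRANT READ, WRITE ON DIRECTORY csr_doc_dir TO your_schema;
-- Then open the file via the directory object's name, passed as a string in single quotes
DECLARE
  log_file UTL_FILE.FILE_TYPE;
BEGIN
  log_file := UTL_FILE.FOPEN('CSR_DOC_DIR', 'outputforDeleteStudentGroup.txt', 'W');
  UTL_FILE.PUT_LINE(log_file, 'output for Delete Student Group');
  UTL_FILE.FCLOSE(log_file);
END;
/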
I want to read a tab-delimited file using PL/SQL and insert the file data into a table.
Every day a new file will be generated.
I am not sure if an external table will help here, because the filename changes based on the date.
Filename: SPRReadResponse_YYYYMMDD.txt
Below is the sample file data.
An option that works on your own PC is to use SQL*Loader. As the file name changes every day, you'd use your operating system's batch script (on MS Windows, these are .BAT files) to pass a different name when calling sqlldr (and the control file).
An external table requires you to have access to the database server and (at least) read privilege on the directory that contains those .TXT files. Unless you're a DBA, you'll have to ask one to set up that environment. As for the changing file name, you could use ALTER TABLE ... LOCATION, which is rather inconvenient.
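For reference, a one-time definition might look like the following sketch; the table name, directory object (ext_data_dir) and column layout are assumptions based on the question:
CREATE TABLE spr_read_response_ext (
  id    NUMBER,
  name  VARCHAR2(100),
  score NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY 0x'09'  -- tab-delimited, per the question
  )
  LOCATION ('SPRReadResponse.txt')
)
REJECT LIMIT UNLIMITED;
-- The inconvenient part: repointing the table whenever the file name changes
ALTER TABLE spr_read_response_ext LOCATION ('SPRReadResponse_20150101.txt');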
If you want to have full control over it, use UTL_FILE; yes, you still need access to that directory on the database server, but, writing a PL/SQL script, you can modify whatever you want, including the file name.
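A minimal UTL_FILE sketch along those lines, assuming a directory object named ext_data_dir that points at the directory holding the daily files:
DECLARE
  in_file UTL_FILE.FILE_TYPE;
  line    VARCHAR2(4000);
BEGIN
  -- The file name can be built at run time, which is the advantage over an external table
  in_file := UTL_FILE.FOPEN('EXT_DATA_DIR',
                            'SPRReadResponse_' || TO_CHAR(SYSDATE, 'YYYYMMDD') || '.txt',
                            'R');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(in_file, line);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;  -- GET_LINE raises NO_DATA_FOUND at end of file
    END;
    -- split "line" on the delimiter (e.g. with REGEXP_SUBSTR) and INSERT into the target table
    NULL;
  END LOOP;
  UTL_FILE.FCLOSE(in_file);
END;
/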
Or, as a simpler option, first rename the input file to SPRReadResponse.txt, then load it and save yourself all that trouble.
The text file contains data like:
1,'name',34
2,'name1',23
If you have Access Client Solutions, you can use the File Transfer function to upload the file.
You can also upload directly from Excel if the file is open:
https://www.ibm.com/support/pages/transferring-data-excel-using-access-client-solutions
When the Db2-server runs on Linux/Unix/Windows, you can CALL a stored procedure to do import or load.
BUT, the file to be imported or loaded must already be on the Db2-server, or on a file-system that the Db2-server process can read. So any filenames are relative to the Db2-server (not to your workstation, unless of course the Db2-server runs on your workstation).
If the target table already exists, the connected-userid needs suitable permissions on that table. If the target table does not exist, you need to create it first.
Also the userid needs execute permission on the stored procedure that does the work.
So there are three steps:
copy the file to be imported (or loaded) to a location that the Db2-server can read.
call the ADMIN_CMD stored procedure with parameters telling it what to do, in this case to import a file (a sketch follows this list).
Examine the result-set of the stored procedure to see what happened. If the import or load failed, you need to run the SQL listed in the MSG_RETRIEVAL column of the result-set to see why it failed (assuming you used the MESSAGES ON SERVER option on import).
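A minimal sketch of step 2, with a hypothetical server-side path and target table (OF DEL means a delimited text file; remember the path is as seen by the Db2-server, not your workstation):
CALL SYSPROC.ADMIN_CMD(
  'IMPORT FROM /db2/import/data.txt OF DEL
   MESSAGES ON SERVER
   INSERT INTO myschema.mytable');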
See the Db2 documentation online here for import or load
There are also many examples here on Stack Overflow.
So do your research and learn.
On Db2 11.5 you can use an external table to import a text file into Db2.
Use the REMOTESOURCE YES option if the file is on your client and not in a directory visible to the database server.
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r_create_ext_table.html?pos=2
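As a rough sketch, assuming Db2 11.5, a comma-delimited file like the sample above, and hypothetical table/column names; REMOTESOURCE YES tells Db2 the file lives on the client that runs the statement:
INSERT INTO myschema.mytable
SELECT *
FROM EXTERNAL '/home/me/data.txt'
     (id INT, name VARCHAR(50), score INT)
     USING (DELIMITER ',' REMOTESOURCE YES);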
I created a backup .cmd file with this code:
EXPDP system/system EXCLUDE=statistics DIRECTORY=bkp_dir DUMPFILE=FULLDB.DMP LOGFILE=FULLDB.log FULL=Y
It works well, but when I run the backup again, it finds that the file exists and terminates the process. It will not run unless I delete the previous file or rename it. I want to add something to the dump file and log file names that makes them differ from day to day, such as the system date or a copy number.
The option REUSE_DUMPFILES specifies whether to overwrite a preexisting dump file; adding REUSE_DUMPFILES=YES to the command above enables it.
Normally, Data Pump Export will return an error if you specify a dump file name that already exists. The REUSE_DUMPFILES parameter allows you to override that behavior and reuse a dump file name.
If you wish to have separate file names for each day, you can build the date into the names with the date command in a Unix/Linux environment:
DUMPFILE=FULLDB_$(date '+%Y-%m-%d').DMP LOGFILE=FULLDB_$(date '+%Y-%m-%d').log
Similar techniques are available on Windows (e.g. building the name from the %DATE% variable in a batch file), which you may explore if you're running expdp in a Windows environment.
I am trying to import/copy my CSV file to PostgreSQL. However, I am encountering these errors. I don't have import/write permissions for the file. Will stdin help, and how? The Postgres docs provide no examples. I was then asked to do a bulk insert, but since there are too many columns with mixed data types, I am not sure how to proceed with that either.
Command to copy the CSV file:
COPY sales.sales_tickets
FROM 'C:/Users/Nandini/Downloads/AIG_Sales_Tickets.csv'
DELIMITER ',' CSV;
ERROR: must be superuser to COPY to or from a file
Hint: Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
1 statement failed.
The command to do a bulk insert is too time-consuming:
insert into sales.sales_tickets values (1,'2',3,'4','5',6,7,8,'9','10','11');
Please suggest. Thank you.
From the PostgreSQL documentation on COPY:
COPY naming a file or command is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
and
Files named in a COPY command are read or written directly by the server, not by the client application. Therefore, they must reside on or be accessible to the database server machine, not the client. They must be accessible to and readable or writable by the PostgreSQL user (the user ID the server runs as), not the client. Similarly, the command specified with PROGRAM is executed directly by the server, not by the client application, must be executable by the PostgreSQL user. COPY naming a file or command is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
You're trying to use the COPY command while violating two of the requirements:
You're trying to execute the COPY command as a non-superuser.
You're trying to read a file on your client machine and have it copied to the server.
This won't work. If you need to perform such a COPY, you need to:
Copy the CSV file to the server, to a directory that can be read by the (system) user running the PostgreSQL server process.
Execute the COPY command from a superuser account (sketched below).
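With both requirements met, the command would look roughly like this (the server-side path is hypothetical):
COPY sales.sales_tickets
FROM '/var/lib/postgresql/import/AIG_Sales_Tickets.csv'
WITH (FORMAT csv, DELIMITER ',');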
Alternative
If you can't do some of these, you can always use a tool such as pgAdmin 4 and use its Import/Export functionality.
See also How to import CSV file data into a PostgreSQL table?
You are an ideal case to use psql's \copy, not COPY. \copy reads the file on the client and must be written on a single line:
\copy sales.sales_tickets FROM 'C:/Users/Nandini/Downloads/AIG_Sales_Tickets.csv' DELIMITER ',' CSV
I'm trying to use Oracle external tables to load flat files into a database, but I'm having a bit of an issue with the LOCATION clause. The files we receive are appended with several pieces of information, including the date, so I was hoping to use wildcards in the LOCATION clause, but it doesn't look like I'm able to.
I think I'm right in assuming I'm unable to use wildcards; does anyone have a suggestion on how I can accomplish this without writing large amounts of code per external table?
Current thoughts:
The only way I can think of doing it at the moment is to have a shell watcher script and a parameter table. The user can specify: input directory, file mask, external table, etc. Then, when a file is found in the directory, the shell script generates a list of files matching the file mask. For each file found, it issues an ALTER TABLE command to change the location of the given external table to that file and launches the rest of the PL/SQL associated with that file. This can be repeated for each file matching the mask. I guess the benefit of this is that I could also add the date to the end of the log and bad files after each run.
I'll post the solution I went with in the end, which appears to be the only way.
I have a file watcher that looks for files in a given input directory with a certain file mask. The lookup table also includes the name of the external table. I then simply issue an ALTER TABLE on the external table with the list of new file names.
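For example, with a hypothetical external table and two files matched by the mask, the generated statement is simply:
ALTER TABLE feed_ext_table LOCATION ('FEED_20150101.TXT', 'FEED_20150102.TXT');
Oracle reads every file in the LOCATION list, so all files matched in one run can be loaded in a single pass.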
For me this wasn't much of an issue as I'm already using shell for most of the file watching and file manipulation. Hopefully this saves someone searching for ages for a solution.