I have a report that I need to run every day at 00:00 and export all the information from the table to a specific location with a specific name.
Example:
select * from my_table
where date between SYSTIMESTAMP -2 and SYSTIMESTAMP -1
and export the result to a file called date.xml.
Is this possible from Oracle SQL Developer or do I need other tools?
No Oracle version was mentioned, so I assume 10g or 11g.
To schedule your process you just have to create a job and schedule it. The job has to run your script (which can be a function or a stored procedure).
Here is the documentation:
http://docs.oracle.com/cd/B28359_01/server.111/b28310/scheduse.htm#i1033533
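As a rough sketch (the job name and the EXPORT_MY_TABLE procedure are placeholders; it assumes you wrap your export logic in a stored procedure), a DBMS_SCHEDULER job that runs every day at 00:00 could look like this:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'DAILY_EXPORT_JOB',       -- placeholder name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'EXPORT_MY_TABLE',        -- your procedure holding the export logic
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=0; BYMINUTE=0; BYSECOND=0',  -- every day at 00:00
    enabled         => TRUE);
END;
/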
To write to a file you can use the SPOOL command in SQL*Plus. Here you can find the documentation: http://docs.oracle.com/cd/B19306_01/server.102/b14357/ch12043.htm
It's really simple to use.
spool /path/filename
your query
spool off
Obviously, the machine from which you run the script must have write permissions on the machine where you're going to write the file (I mention this because I often forget to check it).
Creating an XML file is slightly more complex and a little too long to explain here, but there is a nice post on the Oracle community that explains it with a simple, practical example: https://community.oracle.com/thread/714758?start=0&tstart=0
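For reference, one common approach (a rough sketch, not necessarily the one in the linked post; the spool path is a placeholder) is DBMS_XMLGEN, which turns a query result into an XML document that you can then spool to your date.xml file:
set long 1000000
set pagesize 0
spool /some/path/date.xml
select dbms_xmlgen.getxml('select * from my_table where date between systimestamp - 2 and systimestamp - 1') from dual;
spool off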
If you do not want to use a job in Oracle, you can write a .sql file with the connection commands, the spool command and your query, and schedule it on the server machine as a simple sqlplus command.
Related
I need to run the same damn Oracle query daily (and export the output, which I can do from the SQL file itself). I'd like to automate it using Windows Task Scheduler, but it only opens the script; it doesn't run it.
Is this feasible or is there an easier way?
Your description isn't very detailed (we don't know exactly how you did that), but here's a suggestion:
Create a .bat script (let's call it a.bat) which will call your .sql script (let's call it a.sql). It has only one line:
sqlplus scott/tiger@orcl @a.sql
a.sql is ... well, whatever you need it to be, for example:
prompt Number of rows in EMP table
select count(*) from emp;
exit
Create a job in Task Scheduler which will start a program (a.bat).
I've just tried it (using that simple example I posted above) and got the result (saying that there are 14 rows in the table), so I hope you'll get it as well.
I work in a company, and I need to export the result of one SQL query to CSV every month. I need to save this file in a folder on the company's server. I work with Oracle (SQL Developer).
Is this possible?
Do you have any ideas or a way to solve my problem?
"Every month" leads to a scheduled job - use DBMS_SCHEDULER package to create it (or, if you're on older database versions, see DBMS_JOB).
"CSV file" leads to usage of a stored procedure and UTL_FILE package.
At the end, you'd have a job which periodically calls the stored procedure which - using UTL_FILE - creates a CSV file in a directory on the database server.
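A minimal sketch of that combination (the directory object, procedure, table and column names are all assumptions you would adapt):
-- assumes a directory object pointing to the target folder on the database server:
-- CREATE DIRECTORY export_dir AS '/path/on/db/server';
CREATE OR REPLACE PROCEDURE export_monthly_csv IS
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'monthly_export.csv', 'w');
  FOR r IN (SELECT col1, col2 FROM my_table) LOOP   -- your query goes here
    UTL_FILE.PUT_LINE(l_file, r.col1 || ',' || r.col2);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'MONTHLY_CSV_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'EXPORT_MONTHLY_CSV',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MONTHLY; BYMONTHDAY=1',  -- first day of every month
    enabled         => TRUE);
END;
/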
You can configure your Oracle server (NOT the client) to write files using the UTL_FILE package.
If you want something quicker, you can write a .sql file that uses the SPOOL command instead.
There are a few details to take care of, for example you may have to redirect the output to /dev/null, but it is easier.
I am using a linked server in SQL Server 2012 and refreshing a table from Oracle 9i using the procedure below on a daily basis. The table currently has 15M records, it grows by 2-3K new records every day, and old records are also deleted and updated randomly. It takes 7-8 hours to complete this job overnight. Considering the table is already optimized at the index level on the Oracle side, what would be the most efficient way to do this?
My current process is below:
Truncate table SQLTable
Insert into SQLTable select * from openquery(LinkedServerName, 'Select * from OracleTable')
It doesn't make sense to truncate and reload 15M rows just for 3,000-8,000 changed rows.
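If the Oracle table has a primary key and some kind of last-modified column, one option (a sketch only; ID, LAST_MODIFIED and the column list are assumptions) is to pull just the recent changes over the linked server and MERGE them into the local table:
MERGE dbo.SQLTable AS t
USING (
    SELECT * FROM OPENQUERY(LinkedServerName,
        'SELECT * FROM OracleTable WHERE last_modified >= TRUNC(SYSDATE) - 1')
) AS s
    ON t.ID = s.ID
WHEN MATCHED THEN
    UPDATE SET t.Col1 = s.Col1, t.Col2 = s.Col2   -- list the columns you actually need
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Col1, Col2) VALUES (s.ID, s.Col1, s.Col2);
-- rows deleted on the Oracle side still need separate handling, e.g. an anti-join on the key list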
I would consider using an ETL tool like Pentaho (https://sourceforge.net/projects/pentaho/). You can start with the free community edition.
Pentaho provides Spoon, a graphical interface for creating workflows. With the Pan tool you can execute the file you create in Spoon: basically, create a batch file that runs the Pan command with the .ktr file as an argument. You can then schedule this batch file using Windows Task Scheduler or a Unix cron job.
With this, you can create a workflow that looks for changes and only inserts or updates the changed rows.
So I'm working with a professor who wants me to create a SQL database containing information from CSV files (from the New York City Department of Transportation). I've written a program that takes the CSV file and converts it into the appropriate SQL commands. My question is: how do I automate this so that every 5 minutes or so a program downloads the new CSV file, runs it through my CSV-to-SQL program, and then enters the output into the terminal (which is what I use to interface with my SQL database)? Is there a specific language I should look into? I've seen people talk about cron.
cron is how you make a scheduled task under Unix; the Windows equivalent is to set up a task using Task Scheduler.
I don't know that there's a pure SQL answer to your problem. The way I'd probably approach it is by writing a simple import program in the language of your choice, compiling it to an .EXE, then setting up a Task Scheduler task to run the program every 5 minutes. (Alternatively, you could leave the app running all the time and simply let a timer fire every 5 minutes to trigger the import.)
The simplest thing you could do is loop your script.
If you're running PHP you could do something like:
$running = true;
while ($running)
{
    // Your code that downloads the CSV, converts it
    // and saves it to SQL goes here.
    // Set $some_error to true in that code when something goes wrong.
    $running = $some_error ? false : true;
    sleep(300); // wait 5 minutes before the next run (sleep() takes seconds)
}
I don't know what you're using, but focus on the logic, not the language.
What you can do is use SQL Server Integration Services (SSIS). It's basically a workflow tool built into SQL Server that lets you handle tasks like this. You should be able to import the CSV into a temp table and then run your queries against it.
This is Azure specific but working against hosted SQL should be similar.
https://learn.microsoft.com/en-us/sql/integration-services/import-export-data/import-data-from-excel-to-sql
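As a rough illustration of the import-into-a-temp-table step (the file path, column layout and terminators are assumptions to adapt):
-- stage the downloaded CSV in a temp table, then run your own queries against it
CREATE TABLE #staging (col1 VARCHAR(100), col2 VARCHAR(100), col3 VARCHAR(100));

BULK INSERT #staging
FROM 'C:\data\nyc_dot_feed.csv'    -- hypothetical path to the downloaded file
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);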
I need to export the data from a particular table in my database to Excel files (.xls/.xlsx) that will be located in a shared folder on my network. The situation is like this:
1. I need to use SQL Server Agent jobs.
2. I need to generate a new Excel file every 2 minutes that will contain the refreshed data.
I am using SQL Server 2008, which doesn't include BI Development Studio, and I'm clueless how to solve this. First, I'm not sure how to export the data using jobs because every approach I tried had issues with the OLEDB connection. 'sp_makewebtask' is also not available in SQL Server 2008. And I'm also not sure how to dynamically generate the file names.
Any reference or solution will be helpful.
Follow the steps given below:
1) Make a stored procedure that creates a temporary table and inserts records into it.
2) Make a stored procedure that reads records from that temporary table and writes them to a file (a rough sketch follows after this list).
3) Create a SQL Server Agent job that executes step 1 and step 2 sequentially.
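For step 2 and the dynamic file names, one common approach (a sketch only; it assumes xp_cmdshell is enabled, the paths and table names are placeholders, and note that bcp writes delimited text that Excel can open rather than a true .xlsx) is:
DECLARE @file NVARCHAR(260) =
    N'\\server\shared\export_' + CONVERT(NVARCHAR(8), GETDATE(), 112)
    + N'_' + REPLACE(CONVERT(NVARCHAR(8), GETDATE(), 108), ':', '') + N'.xls';  -- date_time in the name

DECLARE @cmd NVARCHAR(1000) =
    N'bcp "SELECT * FROM MyDb.dbo.MyTable" queryout "' + @file + N'" -c -t, -T -S ' + @@SERVERNAME;

EXEC master..xp_cmdshell @cmd;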
I found a better way. I created an SSIS (SQL Server Integration Services) package to automate the whole export-to-Excel task, then deployed that package using a SQL Server Agent job. This is a neater and cleaner solution, as I found.