SAS query to schedule jobs - sql

I have a task of running a certain set of queries on a daily basis to fetch data from my database. The queries are run in SAS Enterprise Guide.
So basically I need to automate this process. Please suggest some code so that these queries run automatically at a particular time of day and I get my data.

The Enterprise Guide help topic "Scheduling projects and process flows" describes the steps needed:
Automating Projects
Scheduling projects and process flows
In SAS Enterprise Guide, you can use the Microsoft Windows Task Scheduler to schedule projects and process flows to run at a specified time or as the result of a system event. By default, when you open the Task Scheduler, a script is automatically created in SAS Enterprise Guide to run the project or process flow. When you schedule the project or process flow, the Task Scheduler creates a scheduled task that includes the script and the criteria that specify when the task should be run. The scheduled task is added to the project tree.
Note: You must save the project to your local computer before you can create a scheduled task.

Related

Automating PBI dataset refresh after ETL process

We use Talend as our ETL tool and PBI as our dashboard tool. We have scheduled when refreshes must be launched, but we want to launch them after the ETL loading process. Is there any way to tell PBI to refresh the dataset when the ETL process has finished?
You can use Power Automate to ensure an automatic refresh takes place for a Power BI dataset.
You can schedule it, or you can have it triggered by a file or token.
You need to check which version of PBI you are using and whether it includes the Power Automate feature.
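If Power Automate is not available in your tenant, another common pattern is to have the last step of the Talend job call the Power BI REST API directly. Below is a minimal C# sketch of that approach, assuming you have already acquired an Azure AD access token for the Power BI service and know your workspace and dataset IDs (all three values are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RefreshTrigger
{
    static async Task Main()
    {
        // All three values below are placeholders you must supply yourself.
        string accessToken = "<Azure AD access token for the Power BI service>";
        string groupId = "<workspace id>";
        string datasetId = "<dataset id>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // POSTing to the refreshes endpoint queues an asynchronous dataset refresh.
        string url = "https://api.powerbi.com/v1.0/myorg/groups/" + groupId +
                     "/datasets/" + datasetId + "/refreshes";
        HttpResponseMessage response = await client.PostAsync(url, null);
        Console.WriteLine(response.StatusCode); // 202 Accepted means the refresh was queued
    }
}

The POST only queues the refresh; Power BI performs it asynchronously, so a follow-up GET on the same refreshes endpoint is needed if you want to confirm completion.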

Is it possible to create a cross dependency on a file?

I want to run a job based on file availability in Tivoli Workload Scheduler. Is it possible to add a cross dependency on a file when the job is local?
Not sure what you mean by "cross" dependencies in this context.
To wait for a file in Tivoli Workload Scheduler you have the following alternatives:
File dependencies (a.k.a. opens). The job will wait until the specified file test is satisfied. This is the preferred solution if your workload is scheduled and you just want to wait for a file that you know will come.
Use a job or a start condition that runs the filemonitor utility. This is the best solution if you have multiple files that arrive during the day, that you want to process one by one or in batches, and whose arrivals you want to monitor.
EDWA file events. Create an event rule that monitors for file creation, update, or deletion and submits your job streams (or jobs). This is the most dynamic solution, triggering workload based on files that are processed one by one and that may or may not arrive.

Scheduling a job in Pentaho 5.0

I have a job which I want to run on a schedule.
I found that there is an option for scheduling a job in Pentaho 5.0 from the Start step input,
which has scheduling options for interval/day etc.
But my question is: will the scheduling happen even if I close Pentaho, i.e. the Spoon window?
Or should it be up always?
On Linux, you have to look into creating a cron job or using screen for this; that is, you have to start your job in the background so it never dies.
On Windows, no such facility is available, so your tool has to be up all the time to execute the batch process.
Scheduling takes place at the Data Integration server level. If you don't have a DI server running, you need to schedule it using the OS scheduling facility (Task Scheduler on Windows, or cron) and call the job or transformation using kitchen or pan, respectively.

Schedule a Process in C# to run 24/7

I have a business-critical application which needs to run 24/7. Right now it's scheduled using Windows Task Scheduler. The problem with the current implementation is that whenever the application stops, it has to wait one minute to run again (since one minute is the minimum repeat interval in Windows Task Scheduler). So I am building my own task scheduler, which will restart the process (application) within 5 seconds of it terminating. How should my task scheduler know that the process has terminated? Do I need to keep polling the process every second to check whether it's running or not?
You should write your application as a Windows Service, not a standard application.
Among their other advantages, Windows Services give you the ability to define what happens in the event of a failure (e.g. restart the application).
They are also very easy to create in C#.
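For illustration, here is a minimal C# sketch of such a service: it supervises the business-critical process and restarts it a few seconds after it exits, using the Process.Exited event rather than a polling loop. The service name, the executable path, and the 5-second delay are placeholders taken from the question:

using System;
using System.Diagnostics;
using System.ServiceProcess;
using System.Threading;

// Minimal supervisor-service sketch. The service name and the path
// C:\Jobs\CriticalApp.exe are hypothetical placeholders.
public class SupervisorService : ServiceBase
{
    private Process _child;
    private volatile bool _stopping;

    public SupervisorService()
    {
        ServiceName = "CriticalAppSupervisor";
    }

    protected override void OnStart(string[] args)
    {
        StartChild();
    }

    protected override void OnStop()
    {
        _stopping = true;
        if (_child != null && !_child.HasExited)
            _child.Kill();
    }

    private void StartChild()
    {
        _child = new Process
        {
            StartInfo = new ProcessStartInfo(@"C:\Jobs\CriticalApp.exe"),
            EnableRaisingEvents = true // makes Exited fire; no polling needed
        };
        _child.Exited += (sender, e) =>
        {
            if (!_stopping)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5)); // restart window from the question
                StartChild();
            }
        };
        _child.Start();
    }

    public static void Main()
    {
        ServiceBase.Run(new SupervisorService());
    }
}

You would still install the service once (for example with sc create or installutil), and you can additionally configure the service's own recovery options in services.msc so Windows restarts the supervisor itself if it ever fails.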

What is the practice for scheduling multiple inter-dependent SQL Server Agent jobs?

The way my team currently schedules jobs is through the SQL Server Job Agent. Many of these jobs have dependencies on other internal servers which in turn have their own SQL Server Jobs that need to be run to keep their data up to date.
This has created dependencies in the start time and length of each of our SQL Server Jobs. Job A might depend on Job B finishing, so we schedule Job B a certain estimated time in advance of Job A. All of this process is very subjective and not scalable, as we add more jobs and servers which create more dependencies.
I would love to get out of the business of subjectively scheduling these jobs and hoping that the dominoes fall in the right order. I am wondering what the accepted practices for scheduling SQL Server jobs are. Do people use SSIS to chain jobs together? Is there tooling already built into the SQL Server Job Agent to handle this?
What is the accepted way to handle the scheduling of multiple SQL Server jobs with dependencies on each other?
I have used Control-M before to schedule multiple inter-dependent jobs in different environments. Control-M generally works by using batch files (from what I remember) to execute SSIS packages.
We had a complicated environment hosting 2 data warehouses side by side (1 international and 1 US-local). There were jobs that were dependent on other jobs, and those jobs on others, and so on, but by using Control-M we could easily define the dependencies (it has a really nice and intuitive GUI). Another tool that comes to mind is Tidal Scheduler.
There is no set standard for job scheduling, but I think it's safe to say that job schedules depend entirely on what an organization needs. For example, Finance jobs might be dependent on Sales, and Sales on Inventory, and so on. But the point is, if you need job interdependency, using third-party software such as Control-M is a safe bet. It can control jobs in different environments and give you a real sense of company-wide job control.
We too had the requirement to manage dependencies between multiple agent jobs. After looking at various third-party tools and discounting them for various reasons (mainly internal constraints on the use of third-party software), we decided to create our own solution.
The solution centres around a configuration database that holds details about processes (jobs) that need to run and how they are grouped (batches), along with the dependencies between processes.
Summary of configuration tables used:
Batch - high-level definition of a group of related processes; includes metadata such as max concurrent processes, current batch instance, etc.
Process - metadata relating to a process (job), such as name, max wait time, earliest run time, status (enabled/disabled), batch (which batch the process belongs to), process job name, etc.
Batch Instance - the active instance of a given batch
Process Instance - active instances of processes for a given batch
Process Dependency - dependency matrix
Batch Instance Status - lookup for batch instance status
Process Instance Status - lookup for process instance status
Each batch has 2 control jobs - START BATCH and UPDATE BATCH. The first deals with starting all the processes that belong to the batch, and the second is the last to run in any given batch and deals with updating the outcome statuses.
Each process has an agent job associated with it that gets executed by the START BATCH job - processes have a capped concurrency (defined in the batch configuration) so processes are started up to a max of x at a time and then START BATCH waits until a free slot becomes available before starting the next process.
The process agent job steps call a templated SSIS package that deals with the actual ETL work and with the decision making around whether the process needs to run and has to wait for dependencies etc.
We are currently looking to move to a Service Broker solution for greater flexibility and control.
Anyway, this is probably too much detail and not enough example, so the VS2010 project is available on request.
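In lieu of the full project, here is a minimal C# sketch of the two pieces the control jobs rely on, assuming the ProcessInstance and ProcessDependency tables described above (all table and column names are illustrative) and using msdb's documented sp_start_job procedure to start an agent job:

using System.Data.SqlClient;

class BatchController
{
    // Returns true when every predecessor of the given process has a
    // SUCCEEDED instance in the current batch. Table and column names
    // (ProcessDependency, ProcessInstance, StatusCode, ...) are assumptions.
    static bool DependenciesMet(SqlConnection conn, int processId, int batchInstanceId)
    {
        const string sql = @"
            SELECT COUNT(*)
            FROM dbo.ProcessDependency d
            WHERE d.ProcessId = @proc
              AND NOT EXISTS (SELECT 1
                              FROM dbo.ProcessInstance p
                              WHERE p.ProcessId = d.DependsOnProcessId
                                AND p.BatchInstanceId = @batch
                                AND p.StatusCode = 'SUCCEEDED')";
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@proc", processId);
            cmd.Parameters.AddWithValue("@batch", batchInstanceId);
            return (int)cmd.ExecuteScalar() == 0; // zero unmet dependencies
        }
    }

    // sp_start_job is msdb's documented procedure for starting an agent job.
    static void StartAgentJob(SqlConnection conn, string jobName)
    {
        using (var cmd = new SqlCommand("msdb.dbo.sp_start_job", conn))
        {
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@job_name", jobName);
            cmd.ExecuteNonQuery();
        }
    }
}

Note that sp_start_job only queues the job and returns immediately, so the UPDATE BATCH-style bookkeeping still has to write each job's outcome back into the process instance table.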
I'm not sure how much this will help, but we ended up creating an email solution for scheduling.
We built an email reader that accesses an Exchange mailbox. As jobs finish, they send an email to the mail reader to start another job. The other nice part is that most applications have email notifications built in, so there really isn't much in the way of custom programming.
We really only built it in the first place to handle data files coming in from lots of other partners. It was much easier to give them an email address than to set them up with an FTP site, etc.
The mail reader app has now grown to include basic filtering, time-of-day scheduling, use of semaphores to prevent concurrent jobs, etc. It really works great.
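For a flavor of what such a reader looks like, here is a minimal C# sketch using the EWS Managed API (the Microsoft.Exchange.WebServices NuGet package); the mailbox address and the "JOB DONE:" subject convention are made-up placeholders, and the job-starting step is left as a stub:

using System;
using Microsoft.Exchange.WebServices.Data;

class MailReader
{
    static void Main()
    {
        // Mailbox credentials are placeholders.
        var service = new ExchangeService(ExchangeVersion.Exchange2013_SP1)
        {
            Credentials = new WebCredentials("scheduler@example.com", "<password>")
        };
        service.AutodiscoverUrl("scheduler@example.com", url => true);

        // Poll the inbox for completion notifications.
        FindItemsResults<Item> results =
            service.FindItems(WellKnownFolderName.Inbox, new ItemView(50));
        foreach (Item item in results)
        {
            // Hypothetical convention: senders put "JOB DONE: <job name>" in the subject.
            if (item.Subject != null && item.Subject.StartsWith("JOB DONE: "))
            {
                string finishedJob = item.Subject.Substring("JOB DONE: ".Length);
                Console.WriteLine(finishedJob + " finished; starting its successors...");
                // look up successors in configuration, start them,
                // then move or delete the message so it is not reprocessed
            }
        }
    }
}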