Execute Spoon jobs with external software

I have a job built in Spoon which runs without problems from the command line, but I would like to know if there is any software in which I can execute these jobs and watch the execution visually. The idea is to make running these tasks more pleasant for the operations team.

You have two solutions:
Carte:
Use the Carte server, which ships with PDI. Install PDI on any server and launch Carte (specifying a port); you can then execute/view/stop/restart jobs and transformations from any browser. Documentation is here.
Of course, you can also launch a job/transformation from your own PDI: define a new Slave server (left panel, View tab; the default username/password is cluster/cluster), then each time you run a job/transformation, choose the Carte server instead of Pentaho local in the Run configuration.
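If you want to check a running Carte server from a script rather than a browser, here is a minimal sketch (not part of the original answer) that polls Carte's status service with Python requests; the host and port are assumptions, and the credentials shown are Carte's defaults (cluster/cluster):

import requests

# Hypothetical host/port; adjust to wherever you started Carte.
CARTE_STATUS_URL = "http://localhost:8081/kettle/status/"

# Default Carte credentials are cluster/cluster unless you changed them.
resp = requests.get(CARTE_STATUS_URL, params={"xml": "Y"},
                    auth=("cluster", "cluster"), timeout=10)
resp.raise_for_status()
print(resp.text)  # XML listing of the jobs/transformations this Carte instance knows about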
Logging:
If you just want to follow a job/transformation, you may use database logging: right-click anywhere, then Parameters, Logging, Job/Transformation, and define a database, a table, and a logging interval of 2 seconds.
Every two seconds, the lines read, lines written, errors, and log field are then written to that database. The table can be read by an external process and displayed on a screen or in a browser.
This method is used in the github/ETL-pilot project, which uses Tomcat (because you probably already have Tomcat running alongside a Pentaho server), but it can easily be adapted to Node.js or any other server. (If you do this and open-source it, please add a link to your work on our GitHub.)
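As a rough illustration of reading that logging table from an external process, here is a hedged Python/SQLAlchemy sketch; the connection URL, table name (job_log), and column names are assumptions and should be adjusted to whatever you configured in the Logging tab:

from sqlalchemy import create_engine, text

# Assumed connection URL; use whatever database you defined in the Logging tab.
engine = create_engine("postgresql://etl:secret@loghost/etl_logs")

# Assumed table/column names; Spoon shows the exact ones when you configure logging.
query = text("""
    SELECT jobname, status, lines_read, lines_written, errors, logdate
    FROM job_log
    ORDER BY logdate DESC
    LIMIT 20
""")

with engine.connect() as conn:
    for row in conn.execute(query):
        print(row)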

Related

SSIS flat file folder permission error when NOT running from SQL Server Agent

Setup: A pretty standard data export SSIS package (SQL Server 2016 compatible), created in VS2019/Data Tools and deployed using the SSIS Project Deployment model to the Integration Services Catalog of a SQL Server 2016 instance. The package creates files in a network folder before sending the file out via FTP and putting a copy of the file in a Sent folder.
The project requirements include having the package running on a schedule using "default" parameter values, as well as allowing users to manually run the package using "non-default" parameter values from within a stand-alone application.
Current behavior: the package behaves correctly when run from a SQL Server Agent Job that is configured with a SQL proxy and credentials mapped to a domain login with the proper permissions for the network folder.
Problem: the Data Flow task fails to create the file with a "Cannot open the datafile" error when running the package directly using any of the following methods (even when the "current" session is using the same credentials as the SQL Server Credentials/Proxy used by the SQL Server Agent Job):
Using SSMS to right-click on the package and selecting Execute
Using the DTEXEC SQL utility
Using the SSISDB.catalog.start_execution SQL Server stored procedure
As far as I'm aware, these are the only methods capable of starting a SSIS package and changing the package's parameter values. I either need to get one of the latter 2 methods to work, find another option that allows for changing the parameter values while launching the package, or use one of 2 techniques I'm aware of (detailed below) that would add yet another failure point to the process as well as other potential issues.
Note: If the process is changed to create the file initially on the SQL Server's local hard drive, then the Data Flow task succeeds, but the later copy-to-Sent-folder task fails with a very similar permissions error.
Alternative #1: this technique requires creating a new table, loading the parameter values into the table, and changing the package to check the table and potentially set its parameters/variables based on what it finds. The package can then be launched using a SQL Server Agent Job (for which there are multiple ways to launch it manually); if the calling object has correctly populated the table, the package behaves as if its parameters were changed at runtime, otherwise it runs with the default values.
Alternative #2: Change all folders used by the package to point to folders local to the SQL Server instance and then create a separate scheduled task/application/whatever, with the valid credentials, that would synchronize or move the files to their proper network folders.
Regarding "even when the 'current' session is using the same credentials as the SQL Server Credentials/Proxy used by the SQL Server Agent Job":
This is probably because the account is not logged on locally at the SQL Server, and so it's a Double-Hop Impersonation scenario, and would require Kerberos Constrained Delegation to be configured.
And you are correct in assessing the options. The general solution is to invoke catalog.start_execution from a session running on the SQL Server, and an Agent Job is the simplest built-in way to do this (the others being xp_cmdshell, Service Broker Activation, or SQL CLR).
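For reference, here is a hedged sketch of what "change parameter values and call catalog.start_execution" can look like when driven from Python with pyodbc; the connection string, folder/project/package names, and the parameter are placeholders, and the double-hop caveat above still applies unless the session runs on the SQL Server itself:

import pyodbc

# Placeholder connection string; assumes ODBC Driver 17 and Windows authentication.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=MySqlServer;DATABASE=SSISDB;Trusted_Connection=yes",
    autocommit=True,  # catalog.start_execution cannot run inside an open transaction
)

# One T-SQL batch: create an execution, override a package parameter, start it.
tsql = """
SET NOCOUNT ON;
DECLARE @execution_id BIGINT;

EXEC SSISDB.catalog.create_execution
     @folder_name  = N'MyFolder',
     @project_name = N'MyProject',
     @package_name = N'Export.dtsx',
     @execution_id = @execution_id OUTPUT;

EXEC SSISDB.catalog.set_execution_parameter_value
     @execution_id,
     @object_type     = 30,               -- 30 = package parameter
     @parameter_name  = N'OutputFolder',
     @parameter_value = N'\\\\fileserver\\exports';

EXEC SSISDB.catalog.start_execution @execution_id;

SELECT @execution_id AS execution_id;
"""

execution_id = conn.execute(tsql).fetchone()[0]
print("Started SSISDB execution", execution_id)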

Run single SQL build pipeline at a time

TLDR
I want to run only one instance at a time of any build that shares a resource which can only handle one execution at a time, while still being able to run multiple agents on other builds. Another option would be to run an Azure SQL database instance within the build, if that is even possible.
1.) How can I restrict builds that share a resource to one agent at a time? I was looking for a named Azure agent that can be limited, or some sort of namespace that can only be used by one build at a time.
2.) What would be a better approach to testing a SQL install script as part of a build pipeline?
Details
I have several build pipelines and release pipelines set up in Azure. One of my pipelines tests a large SQL script for initializing new instances of a database. This is performed using the Azure SQL Execute Query task. Any errors encountered when running the SQL are supposed to be kicked back to GitHub as a failed build. However, when I increased the number of agents from 1 to 2, I occasionally hit an issue where a build is triggered before the previous one finishes. This breaks the first build.
Here are the agents I am using: (screenshot of the agent pool omitted)
To limit these builds to one agent, if I understand you correctly, you can try the steps below.
First, go to your build pipeline and edit it. Specify the demands for your pipeline; the pipeline will then only run on an agent that satisfies those demands.
For example, if I specify the demands as in the picture below (screenshot omitted), the pipeline will only run on the agent named agentname.
You can view the detailed capabilities of your agent and define custom capabilities: go to Agent pools under Pipelines on your Project Settings page.
Select your agent pool, select your agent, and use Add a new capability to define custom capabilities. You can then use these custom capabilities as demands.
Update:
In a YAML-style pipeline, you can specify which pool/agent runs your pipeline by defining the pool (name, demands, or vmImage). Check here for details:
pool:
  name: string                   # name of the pool to run this job in
  demands: string | [ string ]   # demand(s) an agent must satisfy
  vmImage: string                # name of the VM image you want to use (Microsoft-hosted pools)
You can also try using the Azure Pipelines agent pool, where you can point a specific agent at your pipeline.
Hope you find this helpful.
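The demands approach above pins the pipeline to a single agent. As a different, supplementary technique (not part of this answer), you could also add an early script step that asks the Azure DevOps REST API whether another run of the same build definition is already in progress and fails fast if so; a hedged Python sketch, where the organization, project, definition id, and PAT environment variable are placeholders:

import os
import sys
import requests

# Placeholders: your organization, project, and the numeric build definition id.
ORG = "myorg"
PROJECT = "myproject"
DEFINITION_ID = 42
PAT = os.environ["AZP_PAT"]  # personal access token with Build (read) scope

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds"
params = {
    "definitions": DEFINITION_ID,
    "statusFilter": "inProgress",
    "api-version": "6.0",
}

resp = requests.get(url, params=params, auth=("", PAT), timeout=30)
resp.raise_for_status()
running = resp.json()["value"]

# The current build counts as inProgress too, so more than one means a clash.
if len(running) > 1:
    print("Another build of this definition is already running; failing fast.")
    sys.exit(1)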

How to perform a command in a shell on remote server immediately after deploying some code from intellij idea?

I have a web server running on a virtual machine and I need some actions (e.g. "service apache2 reload") to be performed there automatically after I deploy my code from IDEA.
Automatically -- no way AFAIK.
https://youtrack.jetbrains.com/issue/WI-3344 -- watch this ticket (star/vote/comment) to get notified on any progress.
You may also watch related tickets:
https://youtrack.jetbrains.com/issue/WI-23938
https://youtrack.jetbrains.com/issue/WI-3239
The only manual solutions I may suggest right now are:
either keep an SSH console open (the IDE has one built in) and execute the command manually once deployed,
or create a "Remote SSH External Tools" entry that does the job (connect and issue the specified command) and run it manually after deployment (once created, you can assign a custom shortcut to it so it can be run more easily); see the sketch after this list.
In both cases -- check this manual.
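For the second option, the External Tools entry ultimately just runs a command; here is a minimal Python sketch of an equivalent script you could trigger yourself after deployment (the user, host, and service command are placeholders, and it assumes key-based SSH authentication is already set up):

import subprocess

# Placeholder user/host for the virtual machine running the web server.
REMOTE = "deploy@myserver.example.com"

# Reload Apache on the remote machine once the files have been deployed.
subprocess.run(["ssh", REMOTE, "sudo service apache2 reload"], check=True)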

SQL 2012 - SSIS Package not populating Text file when scheduled

I'm working on SQL 2012 Enterprise and I have a set of SSIS package exports which push data out to text files on a shared network folder. The packages aren't complex and under most circumstances they work perfectly. The problem I'm facing is that they do not work when scheduled, despite reporting that they have succeeded.
Let me explain the scenarios:
1) When run manually from within BIDS, they work correctly, txt files are created and populated with data.
2) When deployed to the SSISDB and run from the Agent job they also work as expected - files are created and populated with data.
3) When the Agent job is scheduled to run in the evening, the job runs and reports success. The files are created but the data is not populated.
I've checked the reports in the Integration Services Catalogs and compared the messages line by line from the OnInformation events. Both runs report that the Flat File Destination wrote xxxx rows.
The data is there, the Agent account has the correct access. I cannot fathom why the job works when started manually, but behaves differently when scheduled.
Has anyone seen anything similar? It feels like a very strange bug....
Kind Regards,
James
Make sure that the account you have set up as the proxy for the SSIS task has read/write access to the file.
In my experience, when you run a SQL Agent job manually, it appears to use the context of the user who initiates it in some way. I always assumed it was a side effect of impersonation. It's only when it actually runs on the schedule that everything uses the assigned security rights.
Additionally, I think when the user starts the job, the user is impersonating the proxy, but when the job is run via the schedule, the agent's account is impersonating the proxy. Make sure the service account has the right to impersonate the proxy. Take a look at sp_grant_login_to_proxy and sp_enum_login_for_proxy.
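As a hedged sketch of those checks driven from Python/pyodbc (the server, login, and proxy names are placeholders):

import pyodbc

# Placeholder connection string; msdb is where the Agent proxy procedures live.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=MySqlServer;DATABASE=msdb;Trusted_Connection=yes",
    autocommit=True,
)

# List the logins currently allowed to use the proxy (placeholder proxy name).
for row in conn.execute("EXEC msdb.dbo.sp_enum_login_for_proxy @proxy_name = N'SSIS_FileShare_Proxy';"):
    print(row)

# Grant an additional login the right to use that proxy (placeholder login name).
conn.execute(
    "EXEC msdb.dbo.sp_grant_login_to_proxy "
    "@login_name = N'DOMAIN\\svc_agent', @proxy_name = N'SSIS_FileShare_Proxy';"
)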
Here's a link that roughly goes through the process:
http://www.mssqltips.com/sqlservertip/2163/running-a-ssis-package-from-sql-server-agent-using-a-proxy-account/
I also recall this video being useful:
http://msdn.microsoft.com/en-us/library/dd440761(v=SQL.100).aspx
I had the same problem with Excel files. It was a permissions issue.
What worked for me was adding the SERVICE account to the folder's security tab. Then the SQL Agent can access the files.

WebLogic 10 WLST command to stop a deployment

Is there a WLST command to stop a WebLogic deployment? (i.e. the opposite of the nmStart() command)
If so, what is it?
I am changing database passwords and I want to shutdown all deployments so all connections will close. Currently I have to log into the console to shut everything down and I am looking for a quicker way.
I'd say nmKill, but I'm not sure about the terminology you are using: nmStart is used to start a server in the current domain using Node Manager, not to start a "deployment".
By the way, the WLS Console provides a recording feature that writes out the edits you make in the console to a WLST script. This can be very handy if you are not a WLST expert. To turn on recording, click 'Record' (in the toolbar near the top of the page). Then make your edits in the console. Finally, turn recording off when you're done.
The more usual method, which is independent of whether you're using Node Manager or not, would be the shutdown command, which can also work at the cluster level.
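For completeness, here is a rough WLST (Jython) sketch of both approaches; the admin URL, credentials, application name, and server name are placeholders:

# Run with something like: wlst.sh stop_deployments.py (placeholder file name)

# Connect to the Admin Server; URL and credentials are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')

# Option 1: stop a single deployed application (the opposite of starting it).
stopApplication('MyWebApp')

# Option 2: shut down a whole managed server (or a cluster), which closes
# every connection its deployments hold.
shutdown('ManagedServer1', 'Server')

disconnect()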