Truncate tables in Kettle job is not working - Pentaho

When we use the "Truncate tables" job entry from the Utility category, it does not work.
We created a job and placed this entry right after the Start step.
When we run the job, nothing is truncated and no error is shown.

Which version of PDI are you using? Also, please check the database connection and the table name.

BigQuery scheduled query: Cannot create a transfer in JURISDICTION_US when destination dataset is located in REGION_ASIA_SOUTHEAST_1

I am getting this error quite frequently while trying to create a scheduled query:
Error creating scheduled query: Cannot create a transfer in
JURISDICTION_US when destination dataset is located in
REGION_ASIA_SOUTHEAST_1
I just need a scheduled query to overwrite data in a table.
I had the same problem while trying to create a scheduled query with python:
400 Cannot create a transfer in REGION_EUROPE_WEST_1 when destination dataset is located in JURISDICTION_EU
I figured out that even though my project is located in europe-west1, my destination dataset was in the multi-regional location EU. I had to update my parent path from parent=project_path to '{project_path}/locations/eu' to make it work.
I hope that it helps someone.
It looks like a bug in BQ.
I had the same problem, with both the source and destination datasets located in the EU.
Just for testing purposes, I changed the destination to another EU dataset, and it worked.
I then updated the scheduled query back to my original destination and now it works. I can't explain why, but it seems to be a workaround.
Maybe you can try starting from the Scheduled Queries page in the BigQuery UI and clicking the "+ Create scheduled query" button; that way I don't get the error. If I start directly from the BigQuery query editor, I get the same error.
From what I tried, it may happen because a table with the same ID as the destination table already exists. This happens even if that table is the saved result of manually running the same query.
I faced the same issue recently. I tried two things and they worked:
Try setting the query location to the location of the destination dataset/table, then try scheduling the query.
If that does not work, run the query and save the results to the intended table in BigQuery first, i.e. create the destination table by storing the results of the query you are trying to schedule (a sketch follows below). Then try scheduling the query.
Both approaches worked for me in different cases.
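For the second suggestion, a minimal sketch of what "create the destination table by storing the query results first" could look like in BigQuery SQL (the dataset and table names are placeholders, not from the original question):

-- Run once manually so the destination table already exists, in the right
-- location, before the schedule is created (names are placeholders).
CREATE OR REPLACE TABLE my_dataset.my_destination_table AS
SELECT *
FROM my_dataset.my_source_table;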
I had this error and tried many of the solutions in this thread. I started a new session in an incognito window and it worked, so I believe this is a transient issue, as suggested.
I just scheduled the query as select 1 and then edited it to the one I actually needed – it worked.
I think the trouble is with the start time of the schedule. If it is in the past relative to local time, then BQ tries to run the request on another server.
I had the same issue. The way I solved it was to disable the editor tabs (there is a button at the top). Then I opened the query settings and set the processing location to EU manually.
I was using the bq command when I came across this issue and was able to resolve it by adding the parameter --location='europe-west1'.
So my final command looked like this:
bq query \
--use_legacy_sql=false \
--display_name='my_table' \
--location='europe-west1' \
'''create or replace table my_dataset.my_table as (select * from external_query('projects/my_mysql_connection/locations/europe-west1/connections/bi', '(select * from my_table)'))'''

SSIS - connection to new database

I am new to SSIS, so the question might seem simple. What I'm trying to do is extract data from a source and load it into a new database that should be created as part of the process (not beforehand). I create that DB using an Execute SQL Task. However, I run into a problem: I'm unable to connect to that DB from the data destination, because the DB does not exist at that point.
Can you please help me with ideas on how to solve this? Or is there another way to build the kind of package I described?
I think you need to create the DB first on your SQL Server and then point to that DB in the destination connection, and map the columns of your source query or table to your destination table.
Your requirement is to extract data from, say, Database1 and copy that data into Database2, and this should happen during execution of the SSIS package.
For this you need to use an Execute SQL Task for the destination as well.
For example:
CREATE DATABASE Database2;
INSERT INTO Database2.dbo.TableName
SELECT * FROM Database1.dbo.TableName;
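If the destination table should not have to exist beforehand either, a variant of the same idea is to let SELECT ... INTO create it (a rough sketch; the dbo schema and the split into two Execute SQL Tasks, one per statement, are assumptions, not part of the original answer):

-- Execute SQL Task 1: create the new database (assumed to run first).
CREATE DATABASE Database2;

-- Execute SQL Task 2: copy the data; SELECT ... INTO creates
-- Database2.dbo.TableName, so it does not need to exist in advance.
SELECT *
INTO Database2.dbo.TableName
FROM Database1.dbo.TableName;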

variable for SQLLOGDIR not found

I am using the Ola Hallengren maintenance solution script. When I run just the database backup job for user databases I get the following error: Unable to start execution of step 1 (reason: Variable SQLLOGDIR not found). The step failed.
I have checked the directory permissions and there is no issue there. The script creates the job with no problem; I get the error message only when I try to run the job.
I had this same issue just the other day. I run a number of 2017 servers, but the issue appeared when I started running the jobs on a 2012 server.
I've dropped Ola a mail to confirm, but as best I can make out, the SQLLOGDIR token specified in the 'Advanced' tab of the step (for logging output) is not supported on 2012, and maybe on other versions below 2017, though I have not tested those.
HTH,
Adam.
You need to replace this part in the Advanced tab with the job name. For example,
replace $(ESCAPE_SQUOTE(JOBNAME)) with CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID)), so that it looks like this:
$(ESCAPE_SQUOTE(SQLLOGDIR))\CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt
instead of this:
$(ESCAPE_SQUOTE(SQLLOGDIR))\$(ESCAPE_SQUOTE(JOBNAME))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt
Do this for all the other jobs if you don't want to recreate them.
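If you would rather script that change than edit each step by hand, something along these lines should work (a minimal sketch; the job name is an assumption, substitute your own, and the step_id may differ):

-- Update the output file of step 1 of the backup job (job name assumed).
EXEC msdb.dbo.sp_update_jobstep
    @job_name         = N'DatabaseBackup - USER_DATABASES - FULL',
    @step_id          = 1,
    @output_file_name = N'$(ESCAPE_SQUOTE(SQLLOGDIR))\CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt';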
I had the same issue on my SQL Server 2012 instance; the error occurred during the DB backup using Ola's scripts. As mentioned above, the issue is with the output file: I changed the output file location in the SQL Agent job and reran the job successfully.
The error is related to the job output file.
When you create a maintenance job using the Ola script, it automatically assigns an output file to each step. Sometimes that location does not exist on the server.
I faced the same issue; I then ran the integrity script manually on the server and it completed without error, which showed that the problem was in the job configuration.
I changed the job output file location and now the job runs fine as well.
The trick is to build the string for the @output_file_name parameter element by element before calling the stored procedure. If you look into Ola's code you will see that is exactly what he is doing.
I have tried to describe this in more detail in the post Add SQL Agent job step with tokens in output file name.
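A rough sketch of that idea (the job and step names, the TSQL subsystem, and the assumption that Ola's procedures live in master are all illustrative, not taken from the post):

-- Build the output file name token by token, then hand it to sp_add_jobstep.
DECLARE @output_file_name nvarchar(512);

SET @output_file_name = N'$(ESCAPE_SQUOTE(SQLLOGDIR))\'
    + N'$(ESCAPE_SQUOTE(JOBNAME))_'
    + N'$(ESCAPE_SQUOTE(STEPID))_'
    + N'$(ESCAPE_SQUOTE(DATE))_'
    + N'$(ESCAPE_SQUOTE(TIME)).txt';

EXEC msdb.dbo.sp_add_jobstep
    @job_name         = N'DatabaseBackup - USER_DATABASES - FULL',   -- assumed
    @step_name        = N'Backup user databases',                    -- assumed
    @subsystem        = N'TSQL',
    @command          = N'EXECUTE master.dbo.DatabaseBackup @Databases = ''USER_DATABASES'', @BackupType = ''FULL''',
    @output_file_name = @output_file_name;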

SQL Server job to execute query from the output CSV file of first step

This is my first job-creation task as a SQL DBA. The first step of the job runs a query and sends the output to a .CSV file. As a last step, I need the job to execute the query contained in that .CSV file (the output of the first step).
I have Googled all possible combinations but with no luck.
Your question got lost somehow ...
Your last two comments make it a little clearer.
If I understand correctly, you create a SQL script which restores all the logins, roles and users, their rights etc. into a newly created DB.
If the generated script can be run in a query window, you can easily execute it with EXECUTE (https://msdn.microsoft.com/de-de/library/ms188332(v=sql.120).aspx)
Another approach could be SQLCMD (http://blog.sqlauthority.com/2013/04/10/sql-server-enable-sqlcmd-mode-in-ssms-sql-in-sixty-seconds-048/)
If you need further help, please come back with more details: What does your "CSV" look like? What have you tried so far?
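For example, assuming the first step writes a runnable T-SQL script to a known path (the file path, server and database names below are placeholders), a later T-SQL job step could load and run it like this; the commented sqlcmd line shows the CmdExec alternative:

-- Load the script produced by step 1 into a variable, then execute it.
DECLARE @sql nvarchar(max);

SELECT @sql = BulkColumn
FROM OPENROWSET(BULK 'C:\Jobs\step1_output.csv', SINGLE_CLOB) AS f;

EXEC (@sql);

-- SQLCMD alternative, as a CmdExec job step:
--   sqlcmd -S YourServer -d YourDatabase -i "C:\Jobs\step1_output.csv" -b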

pentaho spoon transformation to delete data from table

I'm trying to write an ETL job using the Pentaho Data Integration tool, in Spoon. I used the "Delete" step and provided the target table details, but the rows are not getting deleted and there is no error. I have access to the schema. Please suggest.
In order to use the "Delete" step first you need to have a data source where PDI will read the keys to look for in the table. So, your transformation should look like this:
In my example, the first step queries the origin table for a list of Ids to be deleted, and then passes them to the Delete step as keys to be used as condition for the delete instruction.
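Purely as an illustration of that flow (table and column names are invented, not from the question), the Table input step could run a query like the first statement below, and the Delete step then effectively issues a parameterized delete per incoming row, using the key field you configure in its condition grid:

-- Query in the Table input step: produce the key rows to delete.
SELECT id
FROM target_table
WHERE status = 'obsolete';

-- What the Delete step effectively executes for each incoming row,
-- with ? bound to the id value coming down the stream:
DELETE FROM target_table
WHERE id = ?;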