Kettle '?' not working in Table Input step - pentaho

I want to get all the table names from the database and then get all the rows from the tables. So I created a transformation like this:
Get Table Names: Added the database connection and stored the table name in an output field called "tablename".
Table Input: Checked "Replace variables in script" and "Execute for each row", selected the first step under "Insert data from step", and set the SQL to "SELECT * from ?".
I have read a lot of tutorials online, including the documentation.
My problem is that everywhere it says my "?" should be replaced with the parameter, but this does not happen. Here are the logs:
2013/06/22 03:33:25 - Get table names.0 - Starting to run...
2013/06/22 03:33:25 - Postgres 9.1.9 RO - read :9 table names from db meta-data.
2013/06/22 03:33:25 - Table input.0 - Query parameters found = [stackexchange2]
2013/06/22 03:33:25 - Table input.0 - SQL query : SELECT * from ?
2013/06/22 03:33:25 - Table input.0 - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : Unexpected error
2013/06/22 03:33:25 - Table input.0 - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException:
2013/06/22 03:33:25 - Table input.0 - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : An error occurred executing SQL:
2013/06/22 03:33:25 - Table input.0 - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : SELECT * from ?
2013/06/22 03:33:25 - Table input.0 - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) : ERROR: syntax error at or near "$1"
Position: 16
2013/06/22 03:33:25 - Table input.0 - ERROR (version 4.4.0-stable, build 17588 from 2012-11-21 16.02.21 by buildguy) :
I am using Kettle 4.4. I downloaded the Spoon client from here.
UPDATE
I just want to make this work. I am learning the tool right now, and it would be good to know how '?' works.

To solve your situation, I prefer to work with jobs. Please look at {kettle_installation_folder_path}/examples/jobs/process all tables/Process all tables.kjb, because your case is a simplification of that example.

Actually, you can do this, and "Metadata Injection" is how you would do it. Take a look at that.
Also, you can't parameterise a table name; JDBC doesn't allow this. However, you can get away with using the SQL step rather than Table Input and generating the whole SQL string in a field if you really want to. I wouldn't really recommend that, though.
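To illustrate (a minimal sketch against PostgreSQL; the statement and table names here are made up): a prepared statement can bind values but not identifiers, which is exactly the "$1" error in the log above.
PREPARE by_id (integer) AS SELECT * FROM sometable WHERE id = $1;  -- fine: $1 stands for a value
PREPARE by_table (text) AS SELECT * FROM $1;  -- fails: ERROR: syntax error at or near "$1"
By contrast, "Replace variables in script" does plain text substitution before the query is prepared, so SELECT * FROM ${TABLENAME} works, provided TABLENAME is set as a Kettle variable (e.g. by a parent job, as in the example job above), not fed in as a row parameter.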
As said above, please describe further exactly what you're trying to do. If you're just migrating a database without doing any transformation of the data, i.e. from one db to another, then don't bother with Kettle, as that's not what it's for.

Related

Why does dbt run in the CLI but throw an error in the cloud UI for the exact same model?

I am executing dbt run -s model_name on the CLI and the task completes successfully. However, when I run the exact same command on dbt Cloud, I get this error:
Syntax or semantic analysis error thrown in server while executing query.
Error message from server: org.apache.hive.service.cli.HiveSQLException:
Error running query: org.apache.spark.sql.AnalysisException: cannot
resolve '`pv.meta.uuid`' given input columns: []; line 6 pos 4;
\n'Project ['pv.meta.uuid AS page_view_uuid#249595,
'pv.user.visitorCookieId AS (80) (SQLExecDirectW)")
It looks like it fails to recognize the 'pv.meta.uuid' syntax, which extracts data from JSON. It is not clear to me what is going on. Any thoughts? Thank you!

Error using Microsoft Access Input to read .mdb file with Spoon

I'm trying to read an Access database (.mdb) file with Spoon, using the Microsoft Access Input step from the Design tab, but it is not working.
When I press Get Tables in the Content tab, I get this error: Looking for usage map at page 32000, but page type is 0
If I enter one table and press Preview rows, I get this error:
2019/08/27 11:35:32 - Spoon - Spoon
2019/08/27 11:52:22 - /Transformación 1 - Dispatching started for transformation [/Transformación 1]
2019/08/27 11:52:22 - Microsoft Access Input.0 - ERROR (version 8.0.0.0-28, build 8.0.0.0-28 from 2017-11-05 07.27.50 by buildguy) : Couldn't open file #1 : file:///home/pentaho/Escritorio/general.mdb --> java.io.IOException: Looking for usage map at page 32000, but page type is 0
2019/08/27 11:52:22 - Microsoft Access Input.0 - Processing finished (I=0, O=0, R=0, W=0, U=0, E=1)
2019/08/27 11:52:22 - /Transformación 1 - Transformation detected
2019/08/27 11:52:22 - /Transformación 1 - Transformation is killing the other steps!
Thank you.

Unexpected error running Liquibase: ERROR: relation "databasechangelog" does not exist

When running a Liquibase (version 3.5.3) deployment against a new Postgres database, we are getting the error below. The databasechangelog table did not get created by Liquibase, but the databasechangeloglock table did.
INFO 2/7/17 1:27 PM: liquibase: Successfully acquired change log lock
INFO 2/7/17 1:27 PM: liquibase: Successfully released change log lock
Unexpected error running Liquibase: ERROR: relation "audit.databasechangelog" does not exist
Position: 20
SEVERE 2/7/17 1:27 PM: liquibase: ERROR: relation "audit.databasechangelog" does not exist
Position: 20
liquibase.exception.DatabaseException: Error executing SQL SELECT MD5SUM FROM audit.databasechangelog WHERE MD5SUM IS NOT NULL LIMIT 1: ERROR: relation "audit.databasechangelog" does not exist
Position: 20
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:68)
at liquibase.executor.jvm.JdbcExecutor.query(JdbcExecutor.java:126)
at liquibase.executor.jvm.JdbcExecutor.query(JdbcExecutor.java:134)
at liquibase.executor.jvm.JdbcExecutor.queryForList(JdbcExecutor.java:200)
at liquibase.executor.jvm.JdbcExecutor.queryForList(JdbcExecutor.java:194)
at liquibase.changelog.StandardChangeLogHistoryService.init(StandardChangeLogHistoryService.java:212)
at liquibase.Liquibase.checkLiquibaseTables(Liquibase.java:1124)
at liquibase.Liquibase.update(Liquibase.java:205)
at liquibase.Liquibase.update(Liquibase.java:192)
at liquibase.integration.commandline.Main.doMigration(Main.java:1130)
at liquibase.integration.commandline.Main.run(Main.java:188)
at liquibase.integration.commandline.Main.main(Main.java:103)
Caused by: org.postgresql.util.PSQLException: ERROR: relation "audit.databasechangelog" does not exist
Position: 20
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:303)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:289)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:266)
at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:233)
at liquibase.executor.jvm.JdbcExecutor$QueryStatementCallback.doInStatement(JdbcExecutor.java:345)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:55)
... 11 more
There are two schemas, ods and audit, and the search_path is ods, audit, public. We specify the target schema in the connection string (currentSchema=audit). Additionally, we ran successfully against the ods schema.
As a workaround, we can manually create the log table. However, I am wondering whether this is a bug in Liquibase or whether we are doing something wrong. My thought is that Liquibase is somehow seeing ods.databasechangelog and skips creating it.
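For what it's worth, a minimal sketch of that manual workaround in PostgreSQL, assuming the definition can simply be copied from the ods schema where Liquibase already created the table:
CREATE TABLE audit.databasechangelog (LIKE ods.databasechangelog INCLUDING ALL);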
Any thoughts would be appreciated.
Maybe try using the following Liquibase parameter:
--defaultSchemaName=<schema>
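For example (a sketch based on the setup described above; the driver jar, changelog path, and credentials are placeholders):
liquibase --driver=org.postgresql.Driver --classpath=postgresql.jar --changeLogFile=changelog.xml --url="jdbc:postgresql://localhost:5432/mydb?currentSchema=audit" --defaultSchemaName=audit --username=user --password=pass update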

Avro : java.lang.RuntimeException: Unsupported type in record

Input: test.csv
100
101
102
Pig Script :
-- REGISTER statements for the required jars omitted
A = LOAD 'test.csv' USING org.apache.pig.piggybank.storage.CSVExcelStorage() AS (code:chararray);
STORE A INTO 'test' USING org.apache.pig.piggybank.storage.avro.AvroStorage
('schema',
'{"namespace":"com.pig.test.avro","type":"record","name":"Avro_Test","doc":"Avro Test Schema",
"fields":[
{"name":"code","type":["string","null"],"default":null}
]}'
);
I am getting a runtime error during the STORE. Any input on resolving this?
Error Log :
ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backed error: org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.RuntimeException: Unsupported type in record:class java.lang.String
at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:263)
at org.apache.pig.piggybank.storage.avro.PigAvroRecordWriter.write(PigAvroRecordWriter.java:49)
at org.apache.pig.piggybank.storage.avro.AvroStorage.putNext(AvroStorage.java:722)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:558)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:106)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMap
2015-06-02 23:06:03,934 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2015-06-02 23:06:03,934 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
Looks like this is a bug: https://issues.apache.org/jira/browse/PIG-3358
If you can, try updating to Pig 0.14; according to the comments, this has been fixed.

Not able to generate liquibase changelog

I am trying to execute this command at my command prompt:
liquibase --driver=com.mysql.jdbc.Driver --classpath=E:\mysqljar\mysql.jar --changeLogFile=E:\1.xml --url="jdbc:mysql://localhost:3306/abc" --username=root --password=root generateChangeLog
But I am getting this error:
Liquibase Update Failed: Empty result set, expected one row
SEVERE 24/9/13 6:29 PM:liquibase: Empty result set, expected one row
liquibase.exception.DatabaseException: Error getting jdbc:mysql://localhost:3306/abc view with liquibase.statement.core.GetViewDefinitionStatement#53330681
at liquibase.snapshot.jvm.JdbcDatabaseSnapshotGenerator.readView(JdbcDatabaseSnapshotGenerator.java:168)
at liquibase.snapshot.jvm.JdbcDatabaseSnapshotGenerator.readViews(JdbcDatabaseSnapshotGenerator.java:304)
at liquibase.snapshot.jvm.JdbcDatabaseSnapshotGenerator.createSnapshot(JdbcDatabaseSnapshotGenerator.java:241)
at liquibase.snapshot.DatabaseSnapshotGeneratorFactory.createSnapshot(DatabaseSnapshotGeneratorFactory.java:69)
at liquibase.diff.Diff.compare(Diff.java:63)
at liquibase.integration.commandline.CommandLineUtils.doGenerateChangeLog(CommandLineUtils.java:145)
at liquibase.integration.commandline.Main.doMigration(Main.java:760)
at liquibase.integration.commandline.Main.main(Main.java:134)
Caused by: liquibase.exception.DatabaseException: Empty result set, expected one row
at liquibase.util.JdbcUtils.requiredSingleResult(JdbcUtils.java:124)
at liquibase.executor.jvm.JdbcExecutor.queryForObject(JdbcExecutor.java:159)
at liquibase.executor.jvm.JdbcExecutor.queryForObject(JdbcExecutor.java:167)
at liquibase.executor.jvm.JdbcExecutor.queryForObject(JdbcExecutor.java:163)
at liquibase.database.AbstractDatabase.getViewDefinition(AbstractDatabase.java:748)
at liquibase.snapshot.jvm.JdbcDatabaseSnapshotGenerator.readView(JdbcDatabaseSnapshotGenerator.java:166)
... 7 more
Could anyone help me to interpret this?
I had the same problem.
My fix:
There needs to be an entry in the database change log lock table (DATABASECHANGELOGLOCK); it needs to have id=1, locked=false, and the rest of the values set to null.
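In SQL, that fix would look something like this (a sketch, assuming the standard layout of Liquibase's lock table with columns ID, LOCKED, LOCKGRANTED, LOCKEDBY):
INSERT INTO DATABASECHANGELOGLOCK (ID, LOCKED, LOCKGRANTED, LOCKEDBY) VALUES (1, FALSE, NULL, NULL);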
I'm getting the same error, also using MySQL, with both v3.0.0 and v3.0.5 of Liquibase. The error was the same whether I tried a migrate or a generateChangeLog.
./liquibase --logLevel=debug --changeLogFile=./db.changelog-test-v0.1.xml --username=abc --password=abc99 --url="jdbc:mysql://localhost:3306/test" migrate
FYI, here is the select statement that it had trouble executing for the 'migrate' command:
select view_definition from information_schema.views where table_name='patient_info' and table_schema='test'
The information_schema.views table is empty.
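One way to see the mismatch (a diagnostic sketch; 'test' is the schema from the command above) is to compare what information_schema.tables reports as views against what information_schema.views actually contains:
SELECT table_name FROM information_schema.tables WHERE table_schema = 'test' AND table_type = 'VIEW';
SELECT table_name FROM information_schema.views WHERE table_schema = 'test';
Any name returned by the first query but missing from the second (such as patient_info here) would produce exactly this "Empty result set, expected one row" error.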