Why does mapping to a List inside a lifted query fail with "Slick does not know how to map the given types."? - slick-2.0

I am trying to map to a List inside a Slick lifted query, and I get a compilation error:
No matching Shape found.
[error] Slick does not know how to map the given types.
[error] Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported type in a Query (e.g. scala List).
[error] Required level: scala.slick.lifted.FlatShapeLevel
[error] Source type: Seq[String]
[error] Unpacked type: T
[error] Packed type: G
Why is it that in a non-Slick map operation, I can map to any type, whereas in a Slick query, I can map to simple Scala types, but not to a Scala List?

Try importing the driver's lifted embedding, e.g.
import scala.slick.driver.(yourDB)Driver.simple._
for Slick 2.x (in Slick 3.x this becomes import slick.driver.(yourDB)Driver.api._). This brings the driver's implicit Shape instances into scope.
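Even with the right import, though, a List projection will not compile. Inside a lifted query every projection needs an implicit Shape so Slick can translate it into a fixed set of SQL columns; tuples of columns (and case classes mapped via <>) have Shapes, scala.List does not, because the number of result columns must be statically known. A minimal sketch against Slick 2.0, assuming the H2 driver and a hypothetical users table:

import scala.slick.driver.H2Driver.simple._ // substitute your database's driver

class Users(tag: Tag) extends Table[(String, String)](tag, "users") {
  def first = column[String]("first")
  def last = column[String]("last")
  def * = (first, last)
}
val users = TableQuery[Users]

// Compiles: a tuple of columns has a Shape, so Slick can generate SQL for it.
val ok = users.map(u => (u.first, u.last))

// Does not compile, failing with "No matching Shape found":
// val bad = users.map(u => List(u.first, u.last))

If you need a List, build it on the Scala side after the query has run, e.g. users.list.map { case (f, l) => List(f, l) } inside a session.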


Mapping Data Flows - Cannot retrieve value from cached sink

I am trying to look up a value from a cached sink. The data flow looks like the following:
I have created a hash value in my cached sink and want to reference that in my main pipeline.
My key for the cached sink is an array of columns. When I preview the data I get results.
My derived column is then trying to do a lookup against the cached data and is running into an error.
When debugging I get the following error. What am I missing or getting wrong in this statement?
Spark job failed: {
"text/plain": "{"runId":"98c9bae9-210e-4791-9b0d-60bc557ff416","sessionId":"02bc59a8-ac6f-4eeb-952c-2e9bdda49691","status":"Failed","payload":{"statusCode":400,"shortMessage":"DF-SYS-01 at Derive 'GenerateHashKey': java.util.NoSuchElementException: key not found: Id","detailedMessage":"Failure 2022-04-26 04:07:47.375 failed DebugManager.processJob, run=98c9bae9-210e-4791-9b0d-60bc557ff416, errorMessage=DF-SYS-01 at Derive 'GenerateHashKey': java.util.NoSuchElementException: key not found: Id"}}\n"
} - RunId: 98c9bae9-210e-4791-9b0d-60bc557ff416
Thanks
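For reference, a cached lookup in the Data Flow expression language takes one value per key column defined on the cached sink, in the same order, and the desired column is then read off the result. A hypothetical sketch (the sink name cacheHashes, the key columns, and the output column HashValue are assumptions for illustration, not taken from the question):

cacheHashes#lookup(Id, SourceSystem).HashValue

A mismatch between the sink's declared key columns and the values passed to lookup() is one possible source of run-time errors like "key not found: Id".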

CSV file input not working together with set field value step in Pentaho Kettle

I have a very simple Pentaho Kettle transformation that causes a strange error. It consists of reading a field X from a CSV, adding a field Y, setting Y=X, and finally writing it back to another CSV.
Here you can see the steps and the configuration for them:
You can also download the ktr file from here. The input data is just this:
1
2
3
When I run this transformation, I get this error message:
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Unexpected error
ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : org.pentaho.di.core.exception.KettleStepException:
Error writing line
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:273)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.processRow(TextFileOutput.java:195)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Unknown Source)
Caused by: org.pentaho.di.core.exception.KettleStepException:
Error writing field content to file
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:435)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeRowToFile(TextFileOutput.java:249)
... 3 more
Caused by: org.pentaho.di.core.exception.KettleValueException:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
at org.pentaho.di.core.row.value.ValueMetaBase.getBinaryString(ValueMetaBase.java:2185)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.formatField(TextFileOutput.java:290)
at org.pentaho.di.trans.steps.textfileoutput.TextFileOutput.writeField(TextFileOutput.java:392)
... 4 more
All of the above lines start with 2015/09/23 12:51:18 - Text file output.0 -, but I edited it out for brevity. I think the relevant, and confusing, part of the error message is this:
Y Number : There was a data type error: the data type of [B object [[B@b4136a] does not correspond to value meta [Number]
Some further notes:
If I bypass the set field value step by using the lower hop instead, the transformation finishes without errors. This leads me to believe that it is the set field value step that causes the problem.
If I replace the CSV file input with a data grid containing the same data (1, 2, 3), everything works just fine.
If I replace the file output step with a dummy step, the transformation finishes without errors. However, if I preview the dummy, it causes a similar error and the field Y has the value <null> on all three rows.
Before I created this MCVE I got the error on all sorts of seemingly random steps, even when there was no file output present. So I do not think this is related to the file output.
If I change the format from Number to Integer, nothing changes. But if I change it to String, the transformation finishes without errors, and I get this output:
X;Y
1;[B@49e96951
2;[B@7b016abf
3;[B@1a0760b0
Is this a bug? Am I doing something wrong? How can I make this work?
It's because of lazy conversion. Turn it off. This is behaving exactly as designed, although admittedly the error and the user experience could be improved.
Lazy conversion must not be used when you need to access the field value in your transformation, and accessing the value is exactly what the set field value step does. The default should probably be off rather than on.
If your field is going directly to a database, then use it and it will be faster.
You can even have "partially lazy" streams, where you use lazy conversion for speed, but then use a Select values step to "un-lazify" the fields you want to access, while the remainder stays lazy.
Cunning huh?

Liquibase diff without indexes fails when diffChangeLog declared

I'm new to Liquibase and I was playing around with the diff command. It works perfectly fine, but recently I found a case where I can't figure out why it's not functioning.
The main problem is that I want to compare two databases, but without indexes. These are dynamically generated on primary keys and get different names, but are in fact equivalent. Liquibase does not understand this, so I want to run diff without indexes.
So I add this to my pom.xml:
<diffTypes>tables, views, columns, primaryKeys, foreignKeys, uniqueconstraints</diffTypes>
It runs as expected: Liquibase does not compare indexes.
In the next step, I want to generate the diff as a changelog, so I add a diffChangeLogFile:
<diffTypes>tables, views, columns, primaryKeys, foreignKeys, uniqueconstraints</diffTypes>
<diffChangeLogFile>src/main/diffs/diff_test.xml</diffChangeLogFile>
When running liquibase:diff, it fails:
[ERROR] Failed to execute goal org.liquibase:liquibase-maven-plugin:3.4.1:diff (default-cli) on project liquibase_artifactID: Error setting up or running Liquibase: liquibase.command.CommandExecutionException: liquibase.exception.UnexpectedLiquibaseException: Could not resolve MissingObjectChangeGenerator dependencies due to dependency cycle. Dependencies:
[ERROR] [] -> Catalog -> []
[ERROR] [] -> Schema -> []
[ERROR] [Index] -> ForeignKey -> []
[ERROR] [] -> UniqueConstraint -> []
[ERROR] [] -> Column -> []
[ERROR] [] -> Table -> []
[ERROR] [] -> PrimaryKey -> []
[ERROR] [] -> View -> []
[ERROR] -> [Help 1]
Why does Liquibase act like this? Is it "illegal" to generate a diffChangeLog without indexes?
When indexes are included in diffTypes it works, but the generated changelog is unusable because Liquibase wants to change the indexes with createIndex and dropIndex. These statements are not executable (it fails to drop an index on primary keys, and it can't create an index that already exists).
Any ideas how to generate a usable changelog without indexes? Or did I just miss something?
The answer to the question is there in the exception message:
Could not resolve MissingObjectChangeGenerator dependencies due to dependency cycle.
It then lists the dependencies.
Internally, Liquibase generates a directed graph of dependencies and makes sure that the dependencies are all satisfied. If you would like to see the code that does this, see the class DiffToChangeLog and its internal private class DependencyGraph.
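As a workaround sketch (not from the answer above, and to be verified against your plugin version's documentation): let the diff include indexes so the generator's dependency graph is complete, and filter them out of the output instead. Newer versions of the Maven plugin than the 3.4.1 used here expose a diffExcludeObjects parameter for this kind of filtering; the regex below is an assumption:

<diffTypes>tables, views, columns, indexes, primaryKeys, foreignKeys, uniqueconstraints</diffTypes>
<diffExcludeObjects>index:.*</diffExcludeObjects>
<diffChangeLogFile>src/main/diffs/diff_test.xml</diffChangeLogFile>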

Dbeaver Connecting to Hive - SQLException: Method not supported

I'm getting this error when trying to run a select after connecting to Hive.
Is this a bad jar file?
org.jkiss.dbeaver.model.impl.jdbc.JDBCException: SQL Error: Method not supported
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.prepareStatement(JDBCConnectionImpl.java:170)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.prepareStatement(JDBCConnectionImpl.java:1)
at org.jkiss.dbeaver.model.DBUtils.createStatement(DBUtils.java:985)
at org.jkiss.dbeaver.model.DBUtils.prepareStatement(DBUtils.java:963)
at org.jkiss.dbeaver.runtime.sql.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:313)
at org.jkiss.dbeaver.runtime.sql.SQLQueryJob.extractData(SQLQueryJob.java:633)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsProvider.readData(SQLEditor.java:1169)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetDataPumpJob.run(ResultSetDataPumpJob.java:132)
at org.jkiss.dbeaver.runtime.AbstractJob.run(AbstractJob.java:91)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)
Caused by: java.sql.SQLException: Method not supported
at org.apache.hadoop.hive.jdbc.HiveConnection.createStatement(HiveConnection.java:229)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.createStatement(JDBCConnectionImpl.java:350)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.prepareStatement(JDBCConnectionImpl.java:138)
... 9 more
There is a class in the Hive JDBC jar called org.apache.hive.jdbc.HiveResultSetMetaData. This class contains a method isWritable which is not supported by Hive yet. This is the reason why you get the error "Method not supported".
Take the source code of this class and update the above method. Then recompile the class and replace it in the jar. This worked for me.
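A minimal sketch of what such a patch could look like (the exact upstream source differs; the signature comes from java.sql.ResultSetMetaData):

// In org.apache.hive.jdbc.HiveResultSetMetaData: instead of throwing
// SQLException("Method not supported"), report columns as read-only.
public boolean isWritable(int column) throws SQLException {
  return false; // Hive result sets are effectively read-only
}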

play.exceptions.JavaExecutionException: org.hibernate.MappingException: No Dialect mapping for JDBC type

I have a Play framework application [version 1.2.7].
When I try to get data in a non-JPA entity form, I get this exception:
play.exceptions.JavaExecutionException: org.hibernate.MappingException: No Dialect mapping for JDBC type: 16
at play.mvc.ActionInvoker.invoke(ActionInvoker.java:237)
at Invocation.HTTP Request(Play!)
Caused by: javax.persistence.PersistenceException: org.hibernate.MappingException: No Dialect mapping for JDBC type: 16
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1389)
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1317)
at org.hibernate.ejb.QueryImpl.getResultList(QueryImpl.java:255)
My code is as follows:
List lst = JPA.em().createNativeQuery(myQuery)
.setParameter("p", searchPhrase)
.getResultList();
I want to use a native query and extract all the data, but I don't want to be limited to getting it as a JPA entity.
Any help will be appreciated.
If I'm not wrong, JDBC type 16 is BOOLEAN (look at java.sql.Types). Your problem might be caused by the use of a wrong hibernate.dialect which doesn't know the BOOLEAN type that occurs in one of the tables you are trying to select. Check the hibernate.dialect property in your persistence.xml or hibernate.cfg.xml.
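For example, in a Play 1.x application the dialect can be pinned in conf/application.conf (the PostgreSQL dialect below is only an assumption for illustration; use the dialect class matching your database):

# conf/application.conf
jpa.dialect=org.hibernate.dialect.PostgreSQLDialect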