liquibase-hibernate shows all tables as "unexpected"

I followed these steps to get liquibase-hibernate working. I hope I correctly understood the instructions in the wiki.
Our Hibernate entities are declared in applicationContext.xml; we do not have a hibernate.cfg.xml. My Liquibase properties are:
url=jdbc:postgresql://localhost:1234/MY_DATABASE
username=user
password=pass
referenceUrl=hibernate:spring:somePackage?dialect=org.hibernate.dialect.PostgreSQLDialect
The thing is, no matter what I enter as somePackage, Liquibase shows everything (tables, columns, constraints) as "unexpected". Liquibase "finds" somePackage even if it does not exist:
liquibase diff
INFO 09.08.17 10:41: liquibase-hibernate: Reading hibernate configuration hibernate:spring:somePackage?dialect=org.hibernate.dialect.PostgreSQLDialect
INFO 09.08.17 10:41: liquibase-hibernate: Found package somePackage
And the comparison result looks like this:
Reference Database: null # hibernate:spring:somePackage?dialect=org.hibernate.dialect.PostgreSQLDialect (Default Schema: HIBERNATE)
Comparison Database: postgres # jdbc:postgresql://localhost:1234/MY_DATABASE (Default Schema: public)
Compared Schemas: HIBERNATE -> public
Product Name:
Reference: 'Hibernate'
Target: 'PostgreSQL'
Product Version:
Reference: '4.3.11.Final'
Target: '9.5.4'
Missing Catalog(s): NONE
Unexpected Catalog(s): NONE
Changed Catalog(s):
HIBERNATE
name changed from 'HIBERNATE' to 'MY_DATABASE'
Missing Column(s): NONE
[...]
Unexpected Table(s):
activityentity
addressentity
advertisemententity
advertisementusageentity
[...]
I really don't know what's going on or whether I'm doing something wrong. Any help would be appreciated.
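For context, a hibernate:spring:&lt;package&gt; reference URL makes liquibase-hibernate build its reference model by scanning the named package for JPA-annotated (@Entity) classes; if the scan finds none, the reference schema is empty and every table in the target database shows up as "unexpected", which matches the output above. Since the entities here are wired up in applicationContext.xml, the package to pass is the one the entity classes actually live in. A hedged sketch of how the two line up, with made-up names:

# liquibase.properties (com.example.model is illustrative)
referenceUrl=hibernate:spring:com.example.model?dialect=org.hibernate.dialect.PostgreSQLDialect

<!-- applicationContext.xml: the same package the entities live in -->
<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="packagesToScan" value="com.example.model"/>
</bean>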

Related

dbt relationship test compilation error: test definition dictionary must have exactly one key

I'm a new user of dbt, trying to write a relationship test:
- name: PROTOCOL_ID
  tests:
    - relationships:
      to: ref('Animal_Protocols')
      field: id
I am getting this error:
Compilation Error
Invalid test config given in models/Animal_Protocols/schema.yml:
test definition dictionary must have exactly one key, got [('relationships', None), ('to', "ref('Animal_Protocols')"), ('field', 'id')] instead (3 keys)
#: UnparsedNodeUpdate(original_file_path='model...ne)
"unique" and "not-null" tests in the same file are working fine, but I have a similar error with "accepted_values".
I am using dbt cli version 0.21.0 with Snowflake on MacOS Big Sur 11.6.
You are very close! I'm 96% sure this is an indentation issue -- the #1 pain point of working with YAML. Both to and field need to be indented below the relationships key, not at the same level.
See the Tests page of the dbt docs for an example:
- name: PROTOCOL_ID
  tests:
    - relationships:
        to: ref('Animal_Protocols')
        field: id
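The same rule applies to the accepted_values error mentioned in the question: its values key must be nested under accepted_values, not beside it. A sketch with a made-up column and value list:

- name: STATUS
  tests:
    - accepted_values:
        values: ['active', 'inactive']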

Presto cannot access the web page while using SSL

My Presto version is 0.240.
I want to enable SSL so that Presto can be used over HTTPS. I changed my config following only this URL: https://trino.io/docs/current/security/internal-communication.html
But I can't access the Presto address https://192.168.100.142:9999/
I don't know which step I did wrong.
What should I do to implement HTTPS for Presto?
This is my config for a cluster of two machines.
Node 1 (192.168.100.142, hostname sbider-dev-01), /opt/presto-server-0.240/etc/config.properties:
coordinator=true
node-scheduler.include-coordinator=true
query.max-memory=7.5GB
query.max-memory-per-node=3.5GB
query.max-total-memory-per-node=3.5GB
experimental.reserved-pool-enabled=false
memory.heap-headroom-per-node=0.5GB
#experimental.spill-enabled=true
#experimental.max-spill-per-node=8GB
#experimental.query-max-spill-per-node=8GB
query.low-memory-killer.policy=total-reservation-on-blocked-nodes
#http-server.http.port=9999
#discovery-server.enabled=true
#discovery.uri=http://192.168.100.142:9999
internal-communication.shared-secret="8HRJWX41DwtuYZcNw8uMbshA8wDLoLS78tT3UVL+Z+m0xG7KCygGurE9SXEbGy2bLtPLza1MhAnWJp2mJp/S+j9EFWWuztXz7cHJhSz9QFiVxYCs1Wzn+IVKgHD5z+iGbdKjwRtgUjwNvS4MIfqwqwKlVZiEtGgEDv7j/kAgpOYPvFCRJfb/U/+b7qPpwPNDA6kXu3Dj5p1Q81+kmbFO59WSh6c4QwqdbFHAaY8XFWo8tIogxpmwQQqV3BvICmesxlIhBH/pOGgoyl86QQ/TaAMaWjaddNcgO5keTGhhOj/juGZ/gbOL/PHGNs1ENSPRnjvIGLHFQPDrm36YenhfTH5L7X0Q9HwwnEpEoYkDJsmMEV+elPZK767nZXHryuvDvHGs0PhYSRO8ekOgC3CaE1tfiGh5M9H5C2fnyeGRQ0iwtgXh83kRDuPzVrRx5yj2cHQJOZu+CcXCJ3aa1Tijxq56RfdcEz9Frr8n8aXaNMtRlchcXn3+B4biByS9duq28VHHBDlyYQQ6VSKbLDt1GBi5oOQICtrGuOY+/MD+rnV5uxPUQcSIh9KmA1WjahJEz0ItDKpB66JgVkTrVDWEJPeozKTvHRLG9sBudRhQ5abJGEAhx9b78dUbTcEkRlPuvUN1WjwVlUzjyUDKd14ocuhpoOBzjV9kFhTqQZ4zgNo="
http-server.http.enabled=false
#node.internal-address-source=FQDN
node.internal-address=sbider-dev-01,sbider-dev-02
http-server.https.enabled=true
http-server.https.port=9999
# full path to the jks file
http-server.https.keystore.path=/ceshi/keystore.jks
http-server.https.keystore.key=123456
discovery.uri=https://192.168.100.142:9999
internal-communication.https.required=true
internal-communication.https.keystore.path=/ceshi/keystore.jks
internal-communication.https.keystore.key=123456
Node 2 (192.168.100.143, hostname sbider-dev-02), /opt/presto-server-0.240/etc/config.properties:
coordinator=false
query.max-memory=7.5GB
query.max-memory-per-node=3.5GB
query.max-total-memory-per-node=3.5GB
experimental.reserved-pool-enabled=false
memory.heap-headroom-per-node=0.5GB
#experimental.spill-enabled=true
#experimental.max-spill-per-node=8GB
#experimental.query-max-spill-per-node=8GB
query.low-memory-killer.policy=total-reservation-on-blocked-nodes
#discovery.uri=http://192.168.100.142:9999
internal-communication.shared-secret="8HRJWX41DwtuYZcNw8uMbshA8wDLoLS78tT3UVL+Z+m0xG7KCygGurE9SXEbGy2bLtPLza1MhAnWJp2mJp/S+j9EFWWuztXz7cHJhSz9QFiVxYCs1Wzn+IVKgHD5z+iGbdKjwRtgUjwNvS4MIfqwqwKlVZiEtGgEDv7j/kAgpOYPvFCRJfb/U/+b7qPpwPNDA6kXu3Dj5p1Q81+kmbFO59WSh6c4QwqdbFHAaY8XFWo8tIogxpmwQQqV3BvICmesxlIhBH/pOGgoyl86QQ/TaAMaWjaddNcgO5keTGhhOj/juGZ/gbOL/PHGNs1ENSPRnjvIGLHFQPDrm36YenhfTH5L7X0Q9HwwnEpEoYkDJsmMEV+elPZK767nZXHryuvDvHGs0PhYSRO8ekOgC3CaE1tfiGh5M9H5C2fnyeGRQ0iwtgXh83kRDuPzVrRx5yj2cHQJOZu+CcXCJ3aa1Tijxq56RfdcEz9Frr8n8aXaNMtRlchcXn3+B4biByS9duq28VHHBDlyYQQ6VSKbLDt1GBi5oOQICtrGuOY+/MD+rnV5uxPUQcSIh9KmA1WjahJEz0ItDKpB66JgVkTrVDWEJPeozKTvHRLG9sBudRhQ5abJGEAhx9b78dUbTcEkRlPuvUN1WjwVlUzjyUDKd14ocuhpoOBzjV9kFhTqQZ4zgNo="
http-server.http.enabled=false
#node.internal-address-source=FQDN
node.internal-address=sbider-dev-01,sbider-dev-02
http-server.https.enabled=true
http-server.https.port=9999
http-server.https.keystore.path=/ceshi/keystore.jks
http-server.https.keystore.key=123456
discovery.uri=https://192.168.100.142:9999
internal-communication.https.required=true
internal-communication.https.keystore.path=/ceshi/keystore.jks
internal-communication.https.keystore.key=123456
Server log on sbider-dev-01 (/opt/presto-server-0.240/var/log/server.log):
Companion catalogs: catalog_name1=catalog_name2,catalog_name3=catalog_name4,...
2021-01-12T12:41:09.766+0800 INFO main Bootstrap transaction.idle-check-interval 1.00m 1.00m Time interval between idle transactions checks
2021-01-12T12:41:09.766+0800 INFO main Bootstrap transaction.idle-timeout 5.00m 5.00m Amount of time before an inactive transaction is considered expired
2021-01-12T12:41:09.767+0800 INFO main Bootstrap transaction.max-finishing-concurrency 1 1 Maximum parallelism for committing or aborting a transaction
2021-01-12T12:41:09.767+0800 WARN main Bootstrap UNUSED PROPERTIES
2021-01-12T12:41:09.767+0800 WARN main Bootstrap internal-communication.shared-secret
2021-01-12T12:41:09.767+0800 WARN main Bootstrap
2021-01-12T12:41:11.037+0800 ERROR main com.facebook.presto.server.PrestoServer Unable to create injector, see the following errors:
1) Configuration property 'internal-communication.shared-secret' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:238)
1 error
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) Configuration property 'internal-communication.shared-secret' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:238)
1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:543)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:159)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:106)
at com.google.inject.Guice.createInjector(Guice.java:87)
at com.facebook.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:245)
at com.facebook.presto.server.PrestoServer.run(PrestoServer.java:131)
at com.facebook.presto.server.PrestoServer.main(PrestoServer.java:77)
You're following Trino (formerly Presto SQL) documentation for securing internal communication, but you got your Presto binary from Facebook's fork of the project (prestodb).
Go to https://trino.io/download.html to get the latest Trino release.
The alternative (using prestodb's documentation with the prestodb binary) is NOT a safe, viable option, due to known security issues that are not fixed in the prestodb code base.
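For reference, once you are on a Trino release, the properties from the linked page do exist. A minimal coordinator sketch reusing the port and keystore path from the question (note the docs set the shared secret without quotes; in a properties file, quotes become part of the value):

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.enabled=false
http-server.https.enabled=true
http-server.https.port=9999
http-server.https.keystore.path=/ceshi/keystore.jks
http-server.https.keystore.key=123456
internal-communication.shared-secret=<long random secret, no quotes>
internal-communication.https.required=true
discovery.uri=https://192.168.100.142:9999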

How to log in Hibernate which part of the code caused a given SQL

We can turn on all SQL-related logging with the following settings in Spring:
spring.jpa.properties.hibernate.show_sql=true
spring.jpa.properties.hibernate.use_sql_comments=true
spring.jpa.properties.hibernate.format_sql=true
logging.level.org.hibernate.type=trace
If we have a standalone Hibernate/Spring Data command like
myEntityRepository.save(myEntity);
or
entityManager.persist(myEntity);
then it is easy to debug what happened just by reading the generated SQL from the log.
But how would you debug when there isn't any explicit ORM action, like here:
@Transactional
void doHundredOfTask(Long id) {
    MyEntity myEntity = myEntityRepository.findById(id).orElseThrow();
    // here comes a ton of actions on the entity, like setting fields,
    // setting/adding to collections:
    // myEntity.setField1()
    // myEntity.setField2()
    // ...
    // myEntity.setField_N()
    // myEntity.getSomeList().get(0).setSomeField()
    // no ORM action
}
In the end we don't explicitly save anything, but after the transaction Hibernate will flush the changes, hence a massive amount of SQL will appear in the log. If you have a ton of actions on the entity and on its associations, it is extremely hard to debug why a given SQL statement was triggered.
Is there a way to assign the generated SQL to the triggering code in the log?
Edit: Right now all I can do is split the code into smaller chunks or comment out parts of it. But this process is slow...
p6spy can print a stack trace for each executed SQL statement. Here is the configuration to enable this: stacktrace=true.
How to configure p6spy for a Maven project:
Add the p6spy dependency:
<dependency>
    <groupId>p6spy</groupId>
    <artifactId>p6spy</artifactId>
    <version>3.9.1</version>
</dependency>
Wrap the JDBC connection with p6spy:
spring.datasource.url=jdbc:p6spy:mysql://localhost:3306/xxx
spring.datasource.driver-class-name=com.p6spy.engine.spy.P6SpyDriver
Add a spy.properties config at src/main/resources/spy.properties:
stacktrace=true
appender=com.p6spy.engine.spy.appender.Slf4JLogger
logMessageFormat=com.p6spy.engine.spy.appender.MultiLineFormat
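If a full trace for every statement is too noisy, spy.properties also has a stacktraceclass option that limits stack trace logging to traces containing the given class name (com.springapp below is just the package from the example output further down):

stacktrace=true
stacktraceclass=com.springapp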
You can remove the properties below:
spring.jpa.properties.hibernate.show_sql=true
spring.jpa.properties.hibernate.use_sql_comments=true
spring.jpa.properties.hibernate.format_sql=true
With this configuration, p6spy will output the SQL and the stack trace, e.g.:
select x0_.id as id1_7_ from X x0_
15:10:16.166 default [main] INFO c.p.e.spy.appender.Slf4JLogger[logException]-39 -
java.lang.Exception: null
at com.p6spy.engine.common.P6LogQuery.doLog(P6LogQuery.java:126)
...
at org.hibernate.loader.Loader.getResultSet(Loader.java:2341)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:2094)
...
at com.springapp.Test.test(Test.java:36)
...

How to do count in Flink SQL

I'd like to do count(0) in Flink SQL, but it gives an exception like
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: SQL parse failed. UDT in DDL is not supported yet.
I don't know whether there is anything wrong; I expect this to work fine:
INSERT INTO request_join
select requestId,count(0) from requests
GROUP BY TUMBLE(rowtime, INTERVAL '1' HOUR),requestId;
The schema of the table is here:
name: request_join
schema:
  - '`requestId` VARCHAR'
  - '`count` LONG'
properties:
  'connector.type': 'kafka'
  'connector.version': 'universal'
  'connector.topic': 'request_join_test'
  'connector.startup-mode': 'latest-offset'
  'connector.properties.0.key': 'zookeeper.connect'
  'connector.properties.0.value': '10.XXXXXXXXX'
  'connector.properties.1.key': 'bootstrap.servers'
  'connector.properties.1.value': '10.XXXXXXXXX'
  'connector.properties.2.key': 'group.id'
  'connector.properties.2.value': 'request_join_test'
  'update-mode': 'append'
  'format.type': 'json'
  'format.json-schema': '{type: "object", properties: {requestId: {type: "string"}, count: {type: "number"}}}'
I didn't find anything wrong, but it just doesn't work. If I don't count and delete the count column from the schema, it works well, so I'm sure the SQL itself is fine.
I checked the Flink SQL docs; they say some functions are not supported in DDL, so does Flink not support count? I can see from examples that it supports SUM very well.
There is something wrong with your schema: LONG is not a Flink SQL type, so the DDL parser presumably treats it as a user-defined type (hence the "UDT in DDL is not supported yet" error). Use BIGINT instead:
schema:
  - 'requestId VARCHAR'
  - 'count BIGINT'
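Applied to the table definition above, only the type changes. A sketch of the corrected schema block, keeping the backticks from the original definition (count is a reserved word in SQL, so they are worth keeping):

name: request_join
schema:
  - '`requestId` VARCHAR'
  - '`count` BIGINT'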

How to handle a failed write behind from the GridCacheWriteBehindStore?

GridCacheWriteBehindStore does not log the object type or its fields when a write to the underlying store fails. We accept that the cache and the underlying store may be out of sync when using write behind, but we NEED to know what failed.
A simple and likely example is when there is a NOT NULL constraint on a field in a database table and no such check exists in the Java layer.
Here is all you see. Note also that it's logged only at level WARN, which also seems wrong.
[WARN ] 2016-11-30 14:23:17.178 [flusher-0-#57%null%]
GridCacheWriteBehindStore - Unable to update underlying store:
o.a.i.cache.store.jdbc.CacheJdbcPojoStore#3a60c416
You're right, the exception is ignored there. This will be fixed in the upcoming 1.8: https://issues.apache.org/jira/browse/IGNITE-3770
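Until the fix lands, one workaround is to wrap the real store in a decorator that logs the offending entry before rethrowing, so a failed flush at least identifies the object. A minimal sketch, not from the answer; the class name and logging choices are illustrative:

import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;

import org.apache.ignite.cache.store.CacheStore;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical decorator around the configured CacheStore (e.g. a
// CacheJdbcPojoStore): it delegates all operations and logs the failing
// key/value when a write-behind flush throws.
public class LoggingCacheStore<K, V> extends CacheStoreAdapter<K, V> {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingCacheStore.class);

    private final CacheStore<K, V> delegate;

    public LoggingCacheStore(CacheStore<K, V> delegate) {
        this.delegate = delegate;
    }

    @Override public V load(K key) {
        return delegate.load(key);
    }

    @Override public void write(Cache.Entry<? extends K, ? extends V> entry) {
        try {
            delegate.write(entry);
        }
        catch (CacheWriterException e) {
            // This is the information GridCacheWriteBehindStore drops today:
            // the concrete object (e.g. the row violating a NOT NULL constraint).
            LOG.error("Write-behind failed for key={}, value={}", entry.getKey(), entry.getValue(), e);
            throw e;
        }
    }

    @Override public void delete(Object key) {
        delegate.delete(key);
    }
}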