For a test environment, I want to set up HCL Connections 6.5 with OpenLDAP. This should be a more lightweight alternative, and easier to automate, than the full Domino server we use in production. I created test users with the following attributes:
{ sn: Max, cn: Muster, uid: max, displayName: "Max Muster", userPassword: "ldap", mail: "max.muster@example.com" }
All have the objectClasses person, shadowAccount, and inetOrgPerson. After executing collect_dns.sh, the following DN is present in collect.dns:
uid=max,ou=People,dc=cnx,dc=local
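For reference, the LDIF for such a test user might look like this (a sketch; the bind DN in the ldapadd call is an assumption):

```ldif
dn: uid=max,ou=People,dc=cnx,dc=local
objectClass: person
objectClass: shadowAccount
objectClass: inetOrgPerson
uid: max
cn: Muster
sn: Max
displayName: Max Muster
mail: max.muster@example.com
userPassword: ldap
```

```sh
# hypothetical admin DN; adjust to your OpenLDAP setup
ldapadd -x -H ldap://localhost -D "cn=admin,dc=cnx,dc=local" -W -f max.ldif
```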
When syncing those users with ./populate_from_dn_file.sh, I got a failed record. The log file logs/ibmdi.log shows:
2020-05-21 09:41:07,703 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Eagerly caching bean 'PostgreSQL' to allow for resolving potential circular references
2020-05-21 09:41:07,703 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Finished creating instance of bean 'PostgreSQL'
2020-05-21 09:41:07,703 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Creating shared instance of singleton bean 'Sybase'
2020-05-21 09:41:07,704 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Creating instance of bean 'Sybase'
2020-05-21 09:41:07,704 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Eagerly caching bean 'Sybase' to allow for resolving potential circular references
2020-05-21 09:41:07,704 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Finished creating instance of bean 'Sybase'
2020-05-21 09:41:07,704 INFO [org.springframework.jdbc.support.SQLErrorCodesFactory] - SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase]
2020-05-21 09:41:07,703 DEBUG [org.springframework.jdbc.support.SQLErrorCodesFactory] - Looking up default SQLErrorCodes for DataSource [org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy@64a644f9]
2020-05-21 09:41:07,705 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Fetching JDBC Connection from DataSource
2020-05-21 09:41:07,705 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Registering transaction synchronization for JDBC Connection
2020-05-21 09:41:07,706 DEBUG [org.springframework.jdbc.support.SQLErrorCodesFactory] - Database product name cached for DataSource [org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy@64a644f9]: name is 'DB2/LINUXX8664'
2020-05-21 09:41:07,706 DEBUG [org.springframework.jdbc.support.SQLErrorCodesFactory] - SQL error codes for 'DB2/LINUXX8664' found
2020-05-21 09:41:07,706 DEBUG [org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator] - Translating SQLException with SQL state '23502', error code '-407', message [
--- The error occurred while applying a parameter map.
--- Check the Profile.createProfile-InlineParameterMap.
--- Check the statement (update failed).
--- Cause: com.ibm.db2.jcc.c.SqlException: DB2 SQL error: SQLCODE: -407, SQLSTATE: 23502, SQLERRMC: TBSPACEID=5, TABLEID=5, COLNO=7]; SQL was [] for task [SqlMapClient operation]
2020-05-21 09:41:07,707 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
2020-05-21 09:41:07,707 DEBUG [org.springframework.jdbc.datasource.DataSourceTransactionManager] - Initiating transaction rollback
2020-05-21 09:41:07,707 DEBUG [org.springframework.jdbc.datasource.DataSourceTransactionManager] - Rolling back JDBC transaction on Connection [org.apache.commons.dbcp.PoolableConnection@a2d822e9]
2020-05-21 09:41:07,707 DEBUG [org.springframework.jdbc.datasource.DataSourceTransactionManager] - Releasing JDBC Connection [org.apache.commons.dbcp.PoolableConnection@a2d822e9] after transaction
2020-05-21 09:41:07,707 DEBUG [org.springframework.jdbc.datasource.DataSourceUtils] - Returning JDBC Connection to DataSource
2020-05-21 09:41:07,707 ERROR [com.ibm.lconn.profiles.api.tdi.connectors.ProfileConnector] - CLFRN1254E: An error occurred while performing findEntry: max.
2020-05-21 09:41:07,708 ERROR [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - !com.ibm.lconn.profiles.api.tdi.service.TDIException: CLFRN1254E: An error occurred while performing findEntry: max.!
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - [callSyncDB_mod] CTGDIS274I Skipping entry from [addorUpdateDB], CTGDIS393I Throwing this exception to tell the AssemblyLine to skip the current Entry. If used in an EventHandler, this exception tells the EventHandler to skip the remaining actions..
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - [callSyncDB_mod] CTGDIS075I Trying to exit TaskCallBlock.
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - [callSyncDB_mod] CTGDIS076I Succeeded exiting TaskCallBlock.
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - [callSyncDB_mod] CTGDIS057I Hook after_functioncall not enabled.
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - CTGDIS352I Use null Behavior for outputResult.
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - [callSyncDB_mod] CTGDIS504I *Result of attribute mapping*
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - [callSyncDB_mod] CTGDIS505I The 'conn' object
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - [callSyncDB_mod] CTGDIS003I *** Start dumping Entry
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - Operation: generic
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - Entry attributes:
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - displayName (replace): 'Max Muster'
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - $lookup_status (replace): 'success'
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - userPassword (replace): (\6c\64\61\70)
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - $lookup_operation (replace): 'lookup_user'
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - cn (replace): 'Muster'
2020-05-21 09:41:07,708 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - $_already_lookup_secretary (replace):
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - objectClass (replace): 'person' 'shadowAccount' 'inetOrgPerson'
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - entryUUID (replace): 'e74f6eec-2f22-103a-960a-770a291c4e47'
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - $secretary_uid (replace):
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - uid (replace): 'max'
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - $manager_uid (replace):
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - $_already_lookup_manager (replace):
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - syncExisting (replace):
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - $dn (replace): 'uid=max,ou=People,dc=cnx,dc=local'
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - mail (replace): 'max.muster@example.com'
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - sn (replace): 'Max'
2020-05-21 09:41:07,709 INFO [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - $operation (replace): 'add'
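For decoding the error: SQLCODE -407 with SQLSTATE 23502 is DB2's "NULL assigned to a NOT NULL column". The TBSPACEID/TABLEID/COLNO tokens from SQLERRMC can be resolved to the offending table and column through the standard SYSCAT views (a sketch; run against PEOPLEDB as the instance owner):

```sql
SELECT t.tabschema, t.tabname, c.colname
  FROM syscat.tables t
  JOIN syscat.columns c
    ON c.tabschema = t.tabschema AND c.tabname = t.tabname
 WHERE t.tbspaceid = 5
   AND t.tableid = 5
   AND c.colno = 7;
```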
How can I fix this? From the error message alone, I can't tell what the actual problem is.
What I already tried
This blog post describes the same error and indicates that it is caused by the mode field being null, so a field mode needs to be set. To test whether this helps, I mapped the field to a custom function by inserting mode={func_mode} in map_dbrepos_from_source.properties. Additionally, I added this function in profiles_functions.js:
function func_mode(fieldname) {
    return 'internal';
}
This should treat all users as internal and avoid trouble caused by null fields. Using the debug logs, I could verify that this value was applied:
2020-05-21 09:41:07,587 DEBUG [AssemblyLine.AssemblyLines/populate_from_dns_file.1] - CLFRN0011I: Mapping result: mode = internal.
The other thing I tried is disabling validation for fields that don't exist in my LDAP, like guid or isManager, by commenting out their validation rules in validate_dbrepos_fields.properties:
#distinguishedName=(x != null) && (x.length() > 0) && (x.length() <= 256)
#guid=(x != null) && (x.length() > 0) && (x.length() <= 256)
#isManager=(x == null) || (x == "Y") || (x == "N")
#surname=(x != null) && (x.length() > 0) && (x.length() <= 128)
Additionally, the mappings for those fields were set to null to avoid errors from fetching them out of an LDAP entry where they don't exist:
grep "=null" map_dbrepos_from_source.properties
alternateLastname=null
blogUrl=null
bldgId=null
calendarUrl=null
countryCode=null
courtesyTitle=null
deptNumber=null
description=null
employeeNumber=null
employeeTypeCode=null
experience=null
faxNumber=null
freeBusyUrl=null
floor=null
groupwareEmail=null
ipTelephoneNumber=null
jobResp=null
loginId=null
logins=null
managerUid=null
mobileNumber=null
nativeFirstName=null
nativeLastName=null
orgId=null
pagerNumber=null
pagerId=null
pagerServiceProvider=null
pagerType=null
officeName=null
preferredFirstName=null
preferredLanguage=null
preferredLastName=null
profileType=null
secretaryUid=null
shift=null
telephoneNumber=null
tenantKey=null
timezone=null
title=null
workLocationCode=null
isManager=null
Verify that the DB exists
In the past, I had the same problem and found out that the databases were not created properly. So I checked this:
su - db2inst1
/opt/IBM/db2/V11.1/bin/db2 list db directory | grep "Database name"
Database name = OPNACT
Database name = METRICS
Database name = SNCOMM
Database name = PNS
Database name = WIKIS
Database name = FORUM
Database name = HOMEPAGE
Database name = DOGEAR
Database name = PEOPLEDB
Database name = MOBILE
Database name = FILES
Database name = XCC
Database name = BLOGS
All databases are present, especially PEOPLEDB, where TDI places the user profiles fetched from LDAP. The tables also seem to be there:
db2 => list tables for schema EMPINST#
Table/View Schema Type Creation time
------------------------------- --------------- ----- --------------------------
CHG_EMP_DRAFT EMPINST T 2020-05-20-22.48.28.416187
COUNTRY EMPINST T 2020-05-20-22.48.26.864072
DEPARTMENT EMPINST T 2020-05-20-22.48.26.635113
EMPLOYEE EMPINST T 2020-05-20-22.48.25.249286
EMP_DRAFT EMPINST T 2020-05-20-22.48.28.079615
EMP_ROLE_MAP EMPINST T 2020-05-20-22.48.29.296064
EMP_TYPE EMPINST T 2020-05-20-22.48.26.973100
EMP_UPDATE_TIMESTAMP EMPINST T 2020-05-20-22.48.29.539973
EVENTLOG EMPINST T 2020-05-20-22.48.28.764942
GIVEN_NAME EMPINST T 2020-05-20-22.48.25.723208
ORGANIZATION EMPINST T 2020-05-20-22.48.26.745316
PEOPLE_TAG EMPINST T 2020-05-20-22.48.26.477954
PHOTO EMPINST T 2020-05-20-22.48.27.097088
PHOTOBKUP EMPINST T 2020-05-20-22.48.27.311065
PHOTO_GUID EMPINST T 2020-05-20-22.48.27.519014
PROFILES_SCHEDULER_LMGR EMPINST T 2020-05-20-22.48.30.229810
PROFILES_SCHEDULER_LMPR EMPINST T 2020-05-20-22.48.30.340702
PROFILES_SCHEDULER_TASK EMPINST T 2020-05-20-22.48.29.873149
PROFILES_SCHEDULER_TREG EMPINST T 2020-05-20-22.48.30.108769
PROFILE_EXTENSIONS EMPINST T 2020-05-20-22.48.26.025818
PROFILE_EXT_DRAFT EMPINST T 2020-05-20-22.48.26.258480
PROFILE_LAST_LOGIN EMPINST T 2020-05-20-22.48.29.430376
PROFILE_LOGIN EMPINST T 2020-05-20-22.48.29.051552
PROFILE_PREFS EMPINST T 2020-05-20-22.48.29.183711
PROF_CONNECTIONS EMPINST T 2020-05-20-22.48.28.490983
PROF_CONSTANTS EMPINST T 2020-05-20-22.48.28.644499
PRONUNCIATION EMPINST T 2020-05-20-22.48.27.726899
SNPROF_SCHEMA EMPINST T 2020-05-20-22.48.25.020502
SURNAME EMPINST T 2020-05-20-22.48.25.875498
TENANT EMPINST T 2020-05-20-22.48.25.084242
USER_PLATFORM_EVENTS EMPINST T 2020-05-20-22.48.29.659806
WORKLOC EMPINST T 2020-05-20-22.48.27.953047
This matches the number of CREATE TABLE statements in the SQL file:
$ grep -i "create table" /opt/cnx-install/cnx/wizard/connections.sql/profiles/db2/createDb.sql | wc -l
32
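Equivalently, the table count can be checked directly in the DB2 catalog (a sketch using the standard SYSCAT views):

```sql
-- Expect 32 for a fully created PEOPLEDB (type 'T' = base tables)
SELECT COUNT(*) FROM syscat.tables WHERE tabschema = 'EMPINST' AND type = 'T';
```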
You asked the question in May, so I assume this answer comes much too late. For future reference: "Skipping entry from [addorUpdateDB]" is a scripted message which means that the account did not pass the minimal requirements for a Profile entry. If I remember correctly, there are 4 essential fields without which a profile entry can't be created:
email
distinguishedName
guid
uid
Seeing that you left out a guid, the error is logical. You should have mapped your guid to your entryUUID.
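A minimal sketch of the relevant lines in map_dbrepos_from_source.properties for an OpenLDAP source (entryUUID as the guid source is the key change; the other mappings are illustrative):

```properties
guid=entryUUID
uid=uid
email=mail
distinguishedName=$dn
```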
Related
I've just moved up to Quarkus 2.11.1.Final from 2.6.2.Final and my native image is now failing to start up with the error:
2022-07-29 14:53:36,275 WARN [io.qua.ope.run.tra.LateBoundSampler] (vert.x-eventloop-thread-0) No Sampler delegate specified, no action taken.
2022-07-29 14:53:36,361 INFO [io.qua.sma.ope.run.OpenApiRecorder] (main) Default CORS properties will be used, please use 'quarkus.http.cors' properties instead
2022-07-29 14:53:36,398 INFO [liq.database] (main) Set default schema name to public
2022-07-29 14:53:36,432 ERROR [io.qua.run.Application] (main) Failed to start application (with profile prod): java.io.IOException: Found 2 files with the path 'db/changelog/liquibase-changelog-master.yml':
- resource:/db/changelog/liquibase-changelog-master.yml
- resource:/db/changelog/liquibase-changelog-master.yml#1
Search Path:
I tried altering quarkus.liquibase.change-log to something very specific just in case it was picking a file of the same name from some 3rd party, but it doesn't make any difference.
Could this be a bug, or could I have missed something in uprevving Quarkus?
Answer posted as a comment by @wabrit:
In older versions of Quarkus it was necessary to set quarkus.native.resources.includes to ensure that additional Liquibase change files were present in the native image. That seems to be done automatically now, so including them in that property resulted in duplicates.
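In other words, configuration along these lines (the include pattern here is hypothetical) now causes the duplicate and can simply be removed:

```properties
# No longer needed on Quarkus 2.11+: the Liquibase extension registers the
# changelog files in the native image automatically, so this include
# produced the same resource twice.
quarkus.native.resources.includes=db/changelog/*.yml
```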
I'm using Tivoli Directory Integrator (TDI) to sync users from Domino LDAP to the local DB2 people database of HCL Connections. On a test installation, I got the following error when trying to initially sync the users:
[root@cnx65 tdisol]# LANG=en_US.utf8 ./sync_all_dns.sh
create synchronization lock
log4j:WARN No appenders could be found for logger (server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
**********
CLFRN1275I: Begin to hash records in database.
CLFRN1269I: Finish hash records in database.
**********
"message": "CLFRN1254E: An error occurred while performing findEntry: {0}."
"exception": "com.ibm.lconn.profiles.api.tdi.service.TDIException: CLFRN1254E: An error occurred while performing findEntry: {0}."
Synchronize of Database Repository failed
HCL's documentation recommends checking the logs in case of CLFRN1254E. The file logs/SyncUpdates.log contains the following exception:
2020-01-21 07:50:03,803 INFO [org.apache.log4j.DailyRollingFileAppender.7431103d-4d0a-4d63-bdb7-61e274f23ed4] - CTGDIS092I Use entry provided at runtime as work entry (first pass only).
2020-01-21 07:50:11,723 ERROR [org.apache.log4j.DailyRollingFileAppender.7431103d-4d0a-4d63-bdb7-61e274f23ed4] - [hash_db_entries] CTGDIS181E Error while evaluating the hook 'Function error' in the component 'hash_db_entries (hash_db_entries.functioncall_fail).
com.ibm.lconn.profiles.api.tdi.service.TDIException: CLFRN1254E: An error occurred while executing findEntry: {0}.
at com.ibm.lconn.profiles.api.tdi.connectors.ProfileConnector$ProfileCodeBlock.handleRecoverable(ProfileConnector.java:1063)
at com.ibm.lconn.profiles.api.tdi.connectors.Util.TDICodeRunner.run(TDICodeRunner.java:41)
at com.ibm.lconn.profiles.api.tdi.connectors.ProfileConnector.getNextEntry(ProfileConnector.java:155)
at com.ibm.di.server.AssemblyLineComponent.executeOperation(AssemblyLineComponent.java:3370)
at com.ibm.di.server.AssemblyLineComponent.getnext(AssemblyLineComponent.java:932)
at com.ibm.di.server.AssemblyLine.msGetNextIteratorEntry(AssemblyLine.java:3689)
at com.ibm.di.server.AssemblyLine.executeMainStep(AssemblyLine.java:3388)
at com.ibm.di.server.AssemblyLine.executeMainLoop(AssemblyLine.java:3000)
at com.ibm.di.server.AssemblyLine.executeMainLoop(AssemblyLine.java:2983)
at com.ibm.di.server.AssemblyLine.executeAL(AssemblyLine.java:2952)
at com.ibm.di.server.AssemblyLine.run(AssemblyLine.java:1319)
Caused by: org.springframework.jdbc.BadSqlGrammarException: SqlMapClient operation; bad SQL grammar []; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred while applying a parameter map.
--- Check the TDIProfile.get-InlineParameterMap.
--- Check the statement (query failed).
--- Cause: com.ibm.db2.jcc.c.SqlException: DB2 SQL error: SQLCODE: -551, SQLSTATE: 42501, SQLERRMC: LCUSER;SELECT;EMPINST.EMPLOYEE
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:97)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:212)
at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:249)
at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:296)
at com.ibm.lconn.profiles.internal.service.store.sqlmapdao.TDIProfileSqlMapDao.get(TDIProfileSqlMapDao.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
What could be the problem? How could I find out more information why this error occurs?
What I already tried
Increase log level
In profiles_tdi.properties I enabled debug logs for every component:
debug_collect=true
debug_draft=true
debug_fill_codes=true
debug_managers=true
debug_photos=true
debug_pronounce=true
debug_special=true
debug_update_profile=true
trace_profile_tdi_javascript=on
Since this had no effect, I set the log4j level to debug for the entire application in etc/log4j.properties:
log4j.rootCategory=DEBUG, Default
I also tried ALL instead of DEBUG. However, there is no change in the output. I expected to see the SQL query that caused the exception.
Set mode in properties
According to this post, the mode attribute is used to decide whether a user is internal or external. Since the example config says
Actually, any string other than "external" is interpreted as employee.
it is set to mode=memberType. I also tried mode=uid and mode=mail. Both are fields containing a string not equal to "external", so this should result in all members being imported as internal users.
Sync single users
Since my LDAP filter matches around 60 users, I ran ./collect_dns.sh successfully and removed all users from the collect.dns file except my own. Then I synced that user from the DN file with ./populate_from_dn_file.sh. I repeated this for two other users, always with the same error:
CLFRN0027I: After operation, success records is 0, duplicate records 0, failure records is 1, and last successful entry is null.
CLFRN1280I: 20200121105123 Iterations total number: 1.
The only difference is that logs/PopulateDBFromDNFile.log contains more detailed information about the fetched attributes, mappings, and so on. Unfortunately, it doesn't really help with the error, since it produces a similar message:
2020-01-21 10:55:27,530 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] [setup_if_lookup] CTGDIS126I Return false.
2020-01-21 10:55:27,530 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] [setup_if_lookup] CTGDIS123I Returned object class java.lang.Boolean.
2020-01-21 10:55:27,530 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS075I Trying to exit TaskCallBlock.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS076I Succeeded exiting TaskCallBlock.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS057I Hook after_functioncall not enabled.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS352I Use null Behavior for $_already_lookup_manager.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS351I Map Attribute $manager_uid [1].
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS353I Script is: conn["$manager_uid"]
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS352I Use null Behavior for $manager_uid.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS057I Hook functioncall_ok not enabled.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS057I Hook default_ok not enabled.
2020-01-21 10:55:27,538 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] Result: <My Name of the User in dn file>
2020-01-21 10:55:27,591 ERROR [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [ProfileConnector] SqlMapClient operation; bad SQL grammar []; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred while applying a parameter map.
--- Check the TDIProfile.get-InlineParameterMap.
--- Check the statement (query failed).
--- Cause: com.ibm.db2.jcc.c.SqlException: DB2 SQL error: SQLCODE: -551, SQLSTATE: 42501, SQLERRMC: LCUSER;SELECT;EMPINST.EMPLOYEE
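For context when decoding this: SQLCODE -551 means the authorization ID lacks a required privilege, and the SQLERRMC tokens read as &lt;authid&gt;;&lt;operation&gt;;&lt;object&gt;, i.e. LCUSER is missing SELECT on EMPINST.EMPLOYEE. The kind of grant that appGrants.sql would normally apply looks roughly like this (a sketch, not the literal script contents):

```sql
-- Hypothetical example of the privilege the TDI runtime user needs
GRANT SELECT ON TABLE EMPINST.EMPLOYEE TO USER LCUSER;
```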
It turned out that this was an unlucky logical mistake on my part. The database is created using SQL files shipped with the Connections installation wizard, which I import automatically in a loop. Since this was very slow (about 30 minutes for all scripts), I tried to parallelize them by adding a & at the end of the command, with a final wait to make sure all scripts were executed:
- name: Check and create non existing DBs for CNX
  become: yes
  become_user: "{{ db2.instance.name }}"
  shell: |
    db={{ item.name }}
    scripts=({{ item.files | join(' ') }})
    existing_dbs=$(echo -e '{{ existing_dbs.stdout }}')
    echo "Check db ${db}"
    if ! echo ${existing_dbs} | grep -q ${db}; then
      echo "DB ${db} doesn't exist, execute scripts"
      for script in "${scripts[@]}"
      do
        echo "${db}: Execute script ${script}"
        {{ db2.target }}/bin/db2 -td@ -f {{ cnx_sql_dir }}/${script} &
      done
      wait
    fi
  register: db_check
  changed_when: "'execute scripts' in db_check.stdout"
  loop: "{{ cnx.db_scripts }}"
cnx.db_scripts is a mapping of database names to SQL files:
db_scripts:
  - name: PEOPLEDB
    files:
      - profiles/db2/createDb.sql
      - profiles/db2/appGrants.sql
  - name: FORUM
    files:
      # - ...
In retrospect, this was a terrible logical mistake, because I missed the fact that those scripts rely on each other: when profiles/db2/appGrants.sql is executed before profiles/db2/createDb.sql has finished, it fails because the database doesn't exist yet.
As a result, TDI's queries failed because the database and tables were only partly created. I didn't notice this immediately, since the machine was re-deployed several times during development of the Ansible playbook. Strangely, TDI failed in only 2 of 10 deployments. It seems DB2 queues the statements in some way, and depending on the timing, the people database required by TDI is created successfully on some runs.
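The minimal fix (a sketch of the shell portion of the task above) is to drop the & so each script finishes before the next one starts within a database; any parallelism should happen across databases only:

```yaml
  shell: |
    db={{ item.name }}
    scripts=({{ item.files | join(' ') }})
    for script in "${scripts[@]}"
    do
      echo "${db}: Execute script ${script}"
      # no '&' here: createDb.sql completes before appGrants.sql starts
      {{ db2.target }}/bin/db2 -td@ -f {{ cnx_sql_dir }}/${script}
    done
```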
I developed a custom connector with DevKit. The connector acts as a source: it connects to an EJB, extracts the data, and sends it to another endpoint.
I am using Bitronix as the transaction manager.
I used the code below to register my EJB in the Mule transaction context:
public static void registerXaResource(MuleContext muleContext) {
    EJBClientTransactionContext txContext = EJBClientTransactionContext.create(muleContext.getTransactionManager(),
            getSynchronizationRegistry());
    EJBClientTransactionContext.setGlobalContext(txContext);
    XaResourceProducer.registerXAResource("dummyResource", new DummyXaResource());
}

/**
 * @return
 */
private static TransactionSynchronizationRegistry getSynchronizationRegistry() {
    return TransactionManagerServices.getTransactionSynchronizationRegistry();
}
After that, the next endpoint is JMS, configured with XA, always join.
But it does not behave as XA. It looks like Bitronix is delisting the JMS resource:
2019-12-11 16:59:48,398 [Receiving Thread] DEBUG bitronix.tm.resource.jms.DualSessionWrapper - choosing XA session
2019-12-11 16:59:48,410 [Receiving Thread] DEBUG bitronix.tm.resource.jms.DualSessionWrapper - looking for producer based on a MessageProducerConsumerKey on ActiveMQQueue[sampleReplyQueue]
2019-12-11 16:59:48,410 [Receiving Thread] DEBUG bitronix.tm.resource.jms.DualSessionWrapper - found no producer based on a MessageProducerConsumerKey on ActiveMQQueue[sampleReplyQueue], creating it
2019-12-11 16:59:48,411 [Receiving Thread] DEBUG bitronix.tm.resource.jms.DualSessionWrapper - choosing XA session
2019-12-11 16:59:48,447 [Receiving Thread] DEBUG bitronix.tm.resource.jms.DualSessionWrapper - closing a DualSessionWrapper in state ACCESSIBLE of a JmsPooledConnection of pool 1605822565-inboundtest-JMS in state ACCESSIBLE with underlying connection org.apache.activemq.artemis.jms.client.ActiveMQXAConnection@207dd1b7
2019-12-11 16:59:48,447 [Receiving Thread] DEBUG bitronix.tm.resource.common.TransactionContextHelper - delisting a DualSessionWrapper in state ACCESSIBLE of a JmsPooledConnection of pool 1605822565-inboundtest-JMS in state ACCESSIBLE with underlying connection org.apache.activemq.artemis.jms.client.ActiveMQXAConnection@207dd1b7 from a Bitronix Transaction with GTRID [31363035383232353635000000002582E13C00000001], status=ACTIVE, 1 resource(s) enlisted (started Thu Jan 08 12:18:54 IST 1970)
2019-12-11 16:59:48,447 [Receiving Thread] DEBUG bitronix.tm.resource.common.TransactionContextHelper - resource is not in enlisting global transaction context: a DualSessionWrapper in state ACCESSIBLE of a JmsPooledConnection of pool 1605822565-inboundtest-JMS in state ACCESSIBLE with underlying connection org.apache.activemq.artemis.jms.client.ActiveMQXAConnection@207dd1b7
2019-12-11 16:59:48,447 [Receiving Thread] DEBUG bitronix.tm.resource.common.TransactionContextHelper - requeuing a DualSessionWrapper in state ACCESSIBLE of a JmsPooledConnection of pool 1605822565-inboundtest-JMS in state ACCESSIBLE with underlying connection org.apache.activemq.artemis.jms.client.ActiveMQXAConnection@207dd1b7 from a Bitronix Transaction with GTRID [31363035383232353635000000002582E13C00000001], status=ACTIVE, 1 resource(s) enlisted (started Thu Jan 08 12:18:54 IST 1970)
2019-12-11 16:59:48,447 [Receiving Thread] DEBUG bitronix.tm.resource.common.TransactionContextHelper - resource is not in enlisting global transaction context: a DualSessionWrapper in state ACCESSIBLE of a JmsPooledConnection of pool 1605822565-inboundtest-JMS in state ACCESSIBLE with underlying connection org.apache.activemq.artemis.jms.client.ActiveMQXAConnection@207dd1b7
As per the logs, the JMS session does not take part in the transaction that I began.
What is the right way to implement XA in a Mule custom connector?
DevKit doesn't support transactions. Just registering the resource that way is probably not enough to fully implement the XA transaction.
The SDK for Mule 4 does support transactions, though I understand this is not the version you are interested in.
I have the following schematic implementation of a JAX-RS service endpoint:
@GET
@Path("...")
@Transactional
public Response download() {
    java.sql.Blob blob = findBlob(...);
    return Response.ok(blob.getBinaryStream()).build();
}
Invoking the JAX-RS endpoint will fetch a Blob from the database (through JPA) and stream the result back to the HTTP client. The purpose of using a Blob and a stream instead of e.g. JPA's naive BLOB to byte[] mapping is to avoid keeping all of the data in memory and instead stream directly from the database to the HTTP response.
This works as intended, and I actually don't understand why. Isn't the Blob handle I get from the database associated with both the underlying JDBC connection and transaction? If so, I would have expected the Spring transaction to be committed when I return from the download() method, making it impossible for the JAX-RS implementation to later access data from the Blob to stream it back to the HTTP response.
Are you sure that the transaction advice is running? By default, Spring uses the "proxy" advice mode. The transaction advice would only run if you registered the Spring-proxied instance of your resource with the JAX-RS Application, or if you were using "aspectj" weaving instead of the default "proxy" advice mode.
Assuming that a physical transaction is not being re-used as a result of transaction propagation, using @Transactional on this download() method is incorrect in general.
If the transaction advice is actually running, the transaction ends when returning from the download() method. The Blob Javadoc says: "A Blob object is valid for the duration of the transaction in which is was created." However, §16.3.7 of the JDBC 4.2 spec says: "Blob, Clob and NClob objects remain valid for at least the duration of the transaction in which they are created." Therefore, the InputStream returned by getBinaryStream() is not guaranteed to be valid for serving the response; the validity would depend on any guarantees provided by the JDBC driver. For maximum portability, you should rely on the Blob being valid only for the duration of the transaction.
Regardless of whether the transaction advice is running, you potentially have a race condition because the underlying JDBC connection used to retrieve the Blob might be re-used in a way that invalidates the Blob.
EDIT: Testing Jersey 2.17, it appears that the behavior of constructing a Response from an InputStream depends on the specified response MIME type. In some cases, the InputStream is read entirely into memory first before the response is sent. In other cases, the InputStream is streamed back.
Here is my test case:
@Path("test")
public class MyResource {
    @GET
    public Response getIt() {
        return Response.ok(new InputStream() {
            @Override
            public int read() throws IOException {
                return 97; // 'a'
            }
        }).build();
    }
}
If the getIt() method is annotated with @Produces(MediaType.TEXT_PLAIN) or no @Produces annotation, then Jersey attempts to read the entire (infinite) InputStream into memory and the application server eventually crashes from running out of memory. If the getIt() method is annotated with @Produces(MediaType.APPLICATION_OCTET_STREAM), then the response is streamed back.
So, your download() method may be working simply because the blob is not being streamed back. Jersey might be reading the entire blob into memory.
Related: How to stream an endless InputStream with JAX-RS
EDIT2: I have created a demonstration project using Spring Boot and Apache CXF:
https://github.com/dtrebbien/so30356840-cxf
If you run the project and execute on the command line:
curl 'http://localhost:8080/myapp/test/data/1' >/dev/null
Then you will see log output like the following:
2015-06-01 15:58:14.573 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.transport.http.Headers : Request Headers: {Accept=[*/*], Content-Type=[null], host=[localhost:8080], user-agent=[curl/7.37.1]}
2015-06-01 15:58:14.584 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.jaxrs.utils.JAXRSUtils : Trying to select a resource class, request path : /test/data/1
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.jaxrs.utils.JAXRSUtils : Trying to select a resource operation on the resource class com.sample.resource.MyResource
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.jaxrs.utils.JAXRSUtils : Resource operation getIt may get selected
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] org.apache.cxf.jaxrs.utils.JAXRSUtils : Resource operation getIt on the resource class com.sample.resource.MyResource has been selected
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Request path is: /test/data/1
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Request HTTP method is: GET
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Request contentType is: */*
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Accept contentType is: */*
2015-06-01 15:58:14.585 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSInInterceptor : Found operation: getIt
2015-06-01 15:58:14.595 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Creating new transaction with name [com.sample.resource.MyResource.getIt]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT; ''
2015-06-01 15:58:14.595 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Acquired Connection [ProxyConnection[PooledConnection[org.hsqldb.jdbc.JDBCConnection@7b191894]]] for JDBC transaction
2015-06-01 15:58:14.596 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Switching JDBC Connection [ProxyConnection[PooledConnection[org.hsqldb.jdbc.JDBCConnection@7b191894]]] to manual commit
2015-06-01 15:58:14.602 DEBUG 9362 --- [nio-8080-exec-1] o.s.jdbc.core.JdbcTemplate : Executing prepared SQL query
2015-06-01 15:58:14.603 DEBUG 9362 --- [nio-8080-exec-1] o.s.jdbc.core.JdbcTemplate : Executing prepared SQL statement [SELECT data FROM images WHERE id = ?]
2015-06-01 15:58:14.620 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Initiating transaction commit
2015-06-01 15:58:14.620 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Committing JDBC transaction on Connection [ProxyConnection[PooledConnection[org.hsqldb.jdbc.JDBCConnection@7b191894]]]
2015-06-01 15:58:14.621 DEBUG 9362 --- [nio-8080-exec-1] o.s.j.d.DataSourceTransactionManager : Releasing JDBC Connection [ProxyConnection[PooledConnection[org.hsqldb.jdbc.JDBCConnection@7b191894]]] after transaction
2015-06-01 15:58:14.621 DEBUG 9362 --- [nio-8080-exec-1] o.s.jdbc.datasource.DataSourceUtils : Returning JDBC Connection to DataSource
2015-06-01 15:58:14.621 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Invoking handleMessage on interceptor org.apache.cxf.interceptor.OutgoingChainInterceptor@7eaf4562
2015-06-01 15:58:14.622 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Adding interceptor org.apache.cxf.interceptor.MessageSenderInterceptor@20ffeb47 to phase prepare-send
2015-06-01 15:58:14.622 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Adding interceptor org.apache.cxf.jaxrs.interceptor.JAXRSOutInterceptor@5714d386 to phase marshal
2015-06-01 15:58:14.622 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Chain org.apache.cxf.phase.PhaseInterceptorChain@11ca802c was created. Current flow:
prepare-send [MessageSenderInterceptor]
marshal [JAXRSOutInterceptor]
2015-06-01 15:58:14.623 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Invoking handleMessage on interceptor org.apache.cxf.interceptor.MessageSenderInterceptor@20ffeb47
2015-06-01 15:58:14.623 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Adding interceptor org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor@6129236d to phase prepare-send-ending
2015-06-01 15:58:14.623 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Chain org.apache.cxf.phase.PhaseInterceptorChain@11ca802c was modified. Current flow:
prepare-send [MessageSenderInterceptor]
marshal [JAXRSOutInterceptor]
prepare-send-ending [MessageSenderEndingInterceptor]
2015-06-01 15:58:14.623 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Invoking handleMessage on interceptor org.apache.cxf.jaxrs.interceptor.JAXRSOutInterceptor@5714d386
2015-06-01 15:58:14.627 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.j.interceptor.JAXRSOutInterceptor : Response content type is: application/octet-stream
2015-06-01 15:58:14.631 DEBUG 9362 --- [nio-8080-exec-1] o.apache.cxf.ws.addressing.ContextUtils : retrieving MAPs from context property javax.xml.ws.addressing.context.inbound
2015-06-01 15:58:14.631 DEBUG 9362 --- [nio-8080-exec-1] o.apache.cxf.ws.addressing.ContextUtils : WS-Addressing - failed to retrieve Message Addressing Properties from context
2015-06-01 15:58:14.636 DEBUG 9362 --- [nio-8080-exec-1] o.a.cxf.phase.PhaseInterceptorChain : Invoking handleMessage on interceptor org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor@6129236d
2015-06-01 15:58:14.639 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.t.http.AbstractHTTPDestination : Finished servicing http request on thread: Thread[http-nio-8080-exec-1,5,main]
2015-06-01 15:58:14.639 DEBUG 9362 --- [nio-8080-exec-1] o.a.c.t.servlet.ServletController : Finished servicing http request on thread: Thread[http-nio-8080-exec-1,5,main]
I have trimmed the log output for readability. The important thing to note is that the transaction is committed and the JDBC connection is returned before the response is sent. Therefore, the InputStream returned by blob.getBinaryStream() is not necessarily valid and the getIt() resource method may be invoking undefined behavior.
EDIT3: A recommended practice for using Spring's #Transactional annotation is to annotate the service method (see Spring #Transactional Annotation Best Practice). You could have a service method that finds the blob and transfers the blob data to the response OutputStream. The service method could be annotated with #Transactional so that the transaction in which the Blob is created would remain open for the duration of the transfer. However, it seems to me that this approach could introduce a denial of service vulnerability by way of a "slow read" attack. Because the transaction should be kept open for the duration of the transfer for maximum portability, numerous slow readers could lock up your database table(s) by holding open transactions.
One possible approach is to save the blob to a temporary file and stream back the file. See How do I use Java to read from a file that is actively being written? for some ideas on reading a file while it's being simultaneously written, though this case is more straightforward because the length of the blob can be determined by calling the Blob#length() method.
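A sketch of that temp-file approach, reusing the question's hypothetical findBlob(...) and assuming the method is invoked through a Spring proxy so that @Transactional applies (all names illustrative):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.sql.Blob;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class BlobTempFileService {

    // The transaction only needs to stay open while the blob is copied to
    // the temp file, not while the (possibly slow) client reads the response.
    @Transactional
    public Path copyBlobToTempFile(long id) throws Exception {
        Blob blob = findBlob(id); // hypothetical finder from the question
        Path tmp = Files.createTempFile("blob-", ".bin");
        try (InputStream in = blob.getBinaryStream()) {
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
        }
        return tmp;
    }

    // Placeholder for the question's findBlob(...); assumed to exist.
    private Blob findBlob(long id) {
        throw new UnsupportedOperationException();
    }
}
```

The JAX-RS method would then call this service (through the Spring proxy, so the advice applies), return Response.ok(Files.newInputStream(tmp), MediaType.APPLICATION_OCTET_STREAM).build(), and delete the temp file once the response has been written; the transaction is held only for the copy, not for the client's read.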
I've spent some time now debugging the code, and all my assumptions in the question are more or less correct. The @Transactional annotation works as expected: the transaction (both the Spring and the DB transaction) is committed immediately after returning from the download method, the physical DB connection is returned to the connection pool, and the content of the BLOB is obviously read later and streamed to the HTTP response.
The reason why this still works is that the Oracle JDBC driver implements functionality beyond what's required by the JDBC specification. As Daniel pointed out, the JDBC API documentation states that "A Blob object is valid for the duration of the transaction in which is was created." The documentation only states that the Blob is valid during the transaction; it does not state (as claimed by Daniel and initially assumed by me) that the Blob is invalid after the transaction ends.
Using plain JDBC, retrieving the InputStream from two Blobs in two different transactions on the same physical connection and not reading the Blob data until after the transactions are committed demonstrates this behaviour:
Connection conn = DriverManager.getConnection(...);
conn.setAutoCommit(false);
ResultSet rs = conn.createStatement().executeQuery("select data from ...");
rs.next();
InputStream is1 = rs.getBlob(1).getBinaryStream();
rs.close();
conn.commit();
rs = conn.createStatement().executeQuery("select data from ...");
rs.next();
InputStream is2 = rs.getBlob(1).getBinaryStream();
rs.close();
conn.commit();
int b1 = 0, b2 = 0;
while(is1.read()>=0) b1++;
while(is2.read()>=0) b2++;
System.out.println("Read " + b1 + " bytes from 1st blob");
System.out.println("Read " + b2 + " bytes from 2nd blob");
Even if both Blobs have been selected from the same physical connection and from within two different transactions, they can both be read completely.
Closing the JDBC connection (conn.close()) does however finally invalidate the Blob streams.
I had a similar problem, and I can confirm that, at least in my situation, PostgreSQL throws the exception Invalid large object descriptor : 0 with autocommit when using the StreamingOutput approach. The reason is that when the Response from JAX-RS is returned, the transaction is committed, while the streaming method executes later; by that time, the file descriptor is no longer valid.
I have created some helper methods, so that the streaming part opens a new transaction and can stream the Blob. com.foobar.model.Blob is just a return class encapsulating the blob, so that the complete entity does not have to be fetched. findByID is a method using a projection on the blob column and fetching only this column.
So StreamingOutput from JAX-RS and Blob under JPA and Spring transactions work together, but it must be tweaked. The same applies to JPA and EJB, I guess.
// NOTE: has to run inside a transaction to be able to stream from the DB
@Transactional
public void streamBlobToOutputStream(OutputStream outputStream, Class entityClass, String id, SingularAttribute attribute) {
    BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(outputStream);
    try {
        com.foobar.model.Blob blob = fooDao.findByID(id, entityClass, com.foobar.model.Blob.class, attribute);
        if (blob.getBlob() == null) {
            return;
        }
        InputStream inputStream;
        try {
            inputStream = blob.getBlob().getBinaryStream();
        } catch (SQLException e) {
            throw new RuntimeException("Could not read binary data.", e);
        }
        IOUtils.copy(inputStream, bufferedOutputStream);
        // NOTE: the buffer must be flushed, otherwise data seems to be missing
        bufferedOutputStream.flush();
    } catch (Exception e) {
        throw new RuntimeException("Could not send data.", e);
    }
}
/**
 * Builds streaming response for data which can be streamed from a Blob.
 *
 * @param contentType The content type. If <code>null</code>, application/octet-stream is used.
 * @param contentDisposition The content disposition. E.g. naming of the file download. Optional.
 * @param entityClass The entity class to search in.
 * @param id The Id of the entity with the blob field to stream.
 * @param attribute The Blob attribute in the entity.
 * @return the response builder.
 */
protected Response.ResponseBuilder buildStreamingResponseBuilder(String contentType, String contentDisposition,
        Class entityClass, String id, SingularAttribute attribute) {
    StreamingOutput streamingOutput = new StreamingOutput() {
        @Override
        public void write(OutputStream output) throws IOException, WebApplicationException {
            streamBlobToOutputStream(output, entityClass, id, attribute);
        }
    };
    MediaType mediaType = MediaType.APPLICATION_OCTET_STREAM_TYPE;
    if (contentType != null) {
        mediaType = MediaType.valueOf(contentType);
    }
    Response.ResponseBuilder response = Response.ok(streamingOutput, mediaType);
    if (contentDisposition != null) {
        response.header("Content-Disposition", contentDisposition);
    }
    return response;
}
/**
 * Stream a blob from the database.
 *
 * @param contentType The content type. If <code>null</code>, application/octet-stream is used.
 * @param contentDisposition The content disposition. E.g. naming of the file download. Optional.
 * @param currentBlob The current blob value of the entity.
 * @param entityClass The entity class to search in.
 * @param id The Id of the entity with the blob field to stream.
 * @param attribute The Blob attribute in the entity.
 * @return the response.
 */
@Transactional
public Response streamBlob(String contentType, String contentDisposition,
        Blob currentBlob, Class entityClass, String id, SingularAttribute attribute) {
    if (currentBlob == null) {
        return Response.noContent().build();
    }
    return buildStreamingResponseBuilder(contentType, contentDisposition, entityClass, id, attribute).build();
}
I also have to add that there might be an issue with the Blob behavior under Hibernate. By default, Hibernate merges the complete entity with the DB even if only one field was changed; i.e. if you update a name field and leave a large Blob image untouched, the image will be updated as well. Even worse, if the entity is detached, Hibernate has to fetch the Blob from the DB before the merge to determine the dirty status. Because blobs cannot be compared byte-wise (too large), they are considered immutable and the equality comparison is based only on the object reference of the blob. The object reference fetched from the DB is a different reference, so although nothing was changed, the blob is updated again. At least this was the situation for me. I used the annotation @DynamicUpdate on the entity and wrote a user type that handles the blob differently and checks whether it must be updated.
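A sketch of the entity side of that workaround (names illustrative; @DynamicUpdate is Hibernate-specific, and the custom user type for dirty-checking is not shown):

```java
import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.Id;
import javax.persistence.Lob;

import org.hibernate.annotations.DynamicUpdate;

@Entity
@DynamicUpdate // UPDATE statements contain only the columns that actually changed
public class ImageEntity {

    @Id
    private Long id;

    private String name;

    @Lob
    @Basic(fetch = FetchType.LAZY) // keep the blob out of updates that don't touch it
    private java.sql.Blob image;
}
```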
Eclipselink multi-tenant: TABLE_PER_TENANT separate schema
<entity class="mypackage.Foo" >
<multitenant type="TABLE_PER_TENANT" >
<tenant-table-discriminator type="SCHEMA" context-property="xxx"/>
</multitenant>
</entity>
Got the following error:
[EL Warning]: 2014-10-23 21:55:47.406--UnitOfWork(32326774)--Exception [EclipseLink-6168] (Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.QueryException
Exception Description: Query failed to prepare, unexpected error occurred: [java.lang.NullPointerException].
Internal Exception: java.lang.NullPointerException
Query: ReadAllQuery(referenceClass=Foo )
What should the context-property be? How is it used? Thanks.
I can't comment yet, so I am sending you this advice as an answer. Look at my article Multi tenancy with EclipseLink and inherited entities. Maybe you are experiencing the same problems as I had.
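For completeness: context-property names the EntityManager/session property whose runtime value EclipseLink substitutes as the tenant's schema. A sketch of supplying it per EntityManager (the key "xxx" matches the mapping in the question; the unit and schema names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class TenantEntityManagers {

    // "xxx" must match the context-property of the <tenant-table-discriminator>
    public static EntityManager forTenant(EntityManagerFactory emf, String schemaName) {
        Map<String, Object> props = new HashMap<>();
        props.put("xxx", schemaName); // e.g. "TENANT1"; used as the schema for Foo's table
        return emf.createEntityManager(props);
    }
}
```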