I've just moved up to Quarkus 2.11.1.Final from 2.6.2.Final and my native image is now failing to start up with the error:
2022-07-29 14:53:36,275 WARN  [io.qua.ope.run.tra.LateBoundSampler] (vert.x-eventloop-thread-0) No Sampler delegate specified, no action taken.
2022-07-29 14:53:36,361 INFO  [io.qua.sma.ope.run.OpenApiRecorder] (main) Default CORS properties will be used, please use 'quarkus.http.cors' properties instead
2022-07-29 14:53:36,398 INFO  [liq.database] (main) Set default schema name to public
2022-07-29 14:53:36,432 ERROR [io.qua.run.Application] (main) Failed to start application (with profile prod): java.io.IOException: Found 2 files with the path 'db/changelog/liquibase-changelog-master.yml':
 - resource:/db/changelog/liquibase-changelog-master.yml
 - resource:/db/changelog/liquibase-changelog-master.yml#1
Search Path:
I tried changing quarkus.liquibase.change-log to something very specific, just in case it was picking up a file of the same name from some third-party dependency, but it made no difference.
Could this be a bug, or could I have missed something in upgrading Quarkus?
Answer posted as a comment by @wabrit:
In older versions of Quarkus it was necessary to set quarkus.native.resources.includes to ensure that additional Liquibase changelog files were present in the native image. That now seems to be done automatically, so listing them in that property as well resulted in duplicates.
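For illustration, a minimal sketch of what the relevant application.properties entries might look like before and after the upgrade (the changelog path is the one from the error above; the wildcard pattern is an example, not taken from the original project):

# Quarkus 2.6.x: extra Liquibase changelog files had to be listed explicitly for the native image
quarkus.native.resources.includes=db/changelog/*.yml
quarkus.liquibase.change-log=db/changelog/liquibase-changelog-master.yml

# Quarkus 2.11.x: the Liquibase extension registers the changelog resources itself,
# so keeping the includes line produces the "Found 2 files" error above; drop it.
quarkus.liquibase.change-log=db/changelog/liquibase-changelog-master.yml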
I am using our enterprise's Splunk forwarder, which seems to be logging events in Splunk like this, making the Splunk logs a bit difficult to read:
{"log":"[https-jsse-nio-8443-exec-5] 19 Jan 2021 15:30:57,237+0000 UTC INFO rdt.damien.services.CaseServiceImpl CaseServiceImpl :: showCase :: Case Created \n","stream":"stdout","time":"2021-01-19T15:30:57.24005568Z"}
However, different orgs in our sibling enterprise log to Splunk like this, which is far more readable. (There is no technical relationship between us and them, so I cannot leverage their tech support to triage this.)
[http-nio-8443-exec-7] 15 Jan 2021 21:08:49,511+0000 INFO DaoOImpl [{applicationSystemCode=dao-app, userId=ANONYMOUS, webAnalyticsCorrelationId=|}]: This is a sample log
Please note the difference in the logs (mine vs. theirs):
{"log":"[https-jsse-nio-8443-exec-5]..
vs
[http-nio-8443-exec-7]...
Our enterprise team is struggling to determine what causes this. I checked my app.log, which looks fine (logged using Log4j) and doesn't contain the aforementioned {"log": ...} entry:
[https-jsse-nio-8443-exec-5] 19 Jan 2021 15:30:57,237+0000 UTC INFO
rdt.damien.services.CaseServiceImpl CaseServiceImpl:: showCase :: Case
Created
Could someone guide me as to where the problem or configuration might lie that causes the Splunk forwarder to send logs in the {"log": ...} format to Splunk? I thought it had something to do with JSON vs. RAW sourcetypes, but I don't understand whether that is the cause and, if it is, which configs drive it.
Over the course of the investigation I found that it is not Splunk doing this, but rather the Docker container. Docker defaults to the json-file logging driver, which writes output to the /var/lib/docker/containers folder in files with a *-json.log suffix, and those files contain the logs in the {"log": "<EVENT>"} format.
I need to figure out how to change the Docker logging (i.e. the Docker logging driver) so that it writes in a non-JSON format.
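As a rough sketch (assuming you can edit the host's Docker daemon configuration and restart it), the default driver can be switched in /etc/docker/daemon.json:

{
  "log-driver": "journald"
}

followed by a daemon restart (sudo systemctl restart docker). Alternatively it can be set per container, for example docker run --log-driver=journald my-app-image, where my-app-image is a placeholder for the real image. Which non-JSON driver (for example journald or syslog) fits best depends on how the Splunk forwarder picks up the logs.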
My Presto version is 0.240.
What I want to do: enable SSL so that Presto can be accessed over HTTPS.
So I changed my config, following only this URL: https://trino.io/docs/current/security/internal-communication.html
But I can't access the Presto address https://192.168.100.142:9999/.
I don't know which step I did wrong.
What should I do to implement HTTPS for Presto?
This is my config:
A cluster of two machines.
Node 1 (192.168.100.142), hostname: sbider-dev-01
/opt/presto-server-0.240/etc/config.properties
coordinator=true
node-scheduler.include-coordinator=true
query.max-memory=7.5GB
query.max-memory-per-node=3.5GB
query.max-total-memory-per-node=3.5GB
experimental.reserved-pool-enabled=false
memory.heap-headroom-per-node=0.5GB
#experimental.spill-enabled=true
#experimental.max-spill-per-node=8GB
#experimental.query-max-spill-per-node=8GB
query.low-memory-killer.policy=total-reservation-on-blocked-nodes
#http-server.http.port=9999
#discovery-server.enabled=true
#discovery.uri=http://192.168.100.142:9999
internal-communication.shared-secret="8HRJWX41DwtuYZcNw8uMbshA8wDLoLS78tT3UVL+Z+m0xG7KCygGurE9SXEbGy2bLtPLza1MhAnWJp2mJp/S+j9EFWWuztXz7cHJhSz9QFiVxYCs1Wzn+IVKgHD5z+iGbdKjwRtgUjwNvS4MIfqwqwKlVZiEtGgEDv7j/kAgpOYPvFCRJfb/U/+b7qPpwPNDA6kXu3Dj5p1Q81+kmbFO59WSh6c4QwqdbFHAaY8XFWo8tIogxpmwQQqV3BvICmesxlIhBH/pOGgoyl86QQ/TaAMaWjaddNcgO5keTGhhOj/juGZ/gbOL/PHGNs1ENSPRnjvIGLHFQPDrm36YenhfTH5L7X0Q9HwwnEpEoYkDJsmMEV+elPZK767nZXHryuvDvHGs0PhYSRO8ekOgC3CaE1tfiGh5M9H5C2fnyeGRQ0iwtgXh83kRDuPzVrRx5yj2cHQJOZu+CcXCJ3aa1Tijxq56RfdcEz9Frr8n8aXaNMtRlchcXn3+B4biByS9duq28VHHBDlyYQQ6VSKbLDt1GBi5oOQICtrGuOY+/MD+rnV5uxPUQcSIh9KmA1WjahJEz0ItDKpB66JgVkTrVDWEJPeozKTvHRLG9sBudRhQ5abJGEAhx9b78dUbTcEkRlPuvUN1WjwVlUzjyUDKd14ocuhpoOBzjV9kFhTqQZ4zgNo="
http-server.http.enabled=false
#node.internal-address-source=FQDN
node.internal-address=sbider-dev-01,sbider-dev-02
http-server.https.enabled=true
http-server.https.port=9999
# full path to the JKS keystore file
http-server.https.keystore.path=/ceshi/keystore.jks
http-server.https.keystore.key=123456
discovery.uri=https://192.168.100.142:9999
internal-communication.https.required=true
internal-communication.https.keystore.path=/ceshi/keystore.jks
internal-communication.https.keystore.key=123456
Node 2 (192.168.100.143), hostname: sbider-dev-02
cat /opt/presto-server-0.240/etc/config.properties
coordinator=flase
query.max-memory=7.5GB
query.max-memory-per-node=3.5GB
query.max-total-memory-per-node=3.5GB
experimental.reserved-pool-enabled=false
memory.heap-headroom-per-node=0.5GB
#experimental.spill-enabled=true
#experimental.max-spill-per-node=8GB
#experimental.query-max-spill-per-node=8GB
query.low-memory-killer.policy=total-reservation-on-blocked-nodes
#discovery.uri=http://192.168.100.142:9999
internal-communication.shared-secret="8HRJWX41DwtuYZcNw8uMbshA8wDLoLS78tT3UVL+Z+m0xG7KCygGurE9SXEbGy2bLtPLza1MhAnWJp2mJp/S+j9EFWWuztXz7cHJhSz9QFiVxYCs1Wzn+IVKgHD5z+iGbdKjwRtgUjwNvS4MIfqwqwKlVZiEtGgEDv7j/kAgpOYPvFCRJfb/U/+b7qPpwPNDA6kXu3Dj5p1Q81+kmbFO59WSh6c4QwqdbFHAaY8XFWo8tIogxpmwQQqV3BvICmesxlIhBH/pOGgoyl86QQ/TaAMaWjaddNcgO5keTGhhOj/juGZ/gbOL/PHGNs1ENSPRnjvIGLHFQPDrm36YenhfTH5L7X0Q9HwwnEpEoYkDJsmMEV+elPZK767nZXHryuvDvHGs0PhYSRO8ekOgC3CaE1tfiGh5M9H5C2fnyeGRQ0iwtgXh83kRDuPzVrRx5yj2cHQJOZu+CcXCJ3aa1Tijxq56RfdcEz9Frr8n8aXaNMtRlchcXn3+B4biByS9duq28VHHBDlyYQQ6VSKbLDt1GBi5oOQICtrGuOY+/MD+rnV5uxPUQcSIh9KmA1WjahJEz0ItDKpB66JgVkTrVDWEJPeozKTvHRLG9sBudRhQ5abJGEAhx9b78dUbTcEkRlPuvUN1WjwVlUzjyUDKd14ocuhpoOBzjV9kFhTqQZ4zgNo="
http-server.http.enabled=false
#node.internal-address-source=FQDN
node.internal-address=sbider-dev-01,sbider-dev-02
http-server.https.enabled=true
http-server.https.port=9999
http-server.https.keystore.path=/ceshi/keystore.jks
http-server.https.keystore.key=123456
discovery.uri=https://192.168.100.142:9999
internal-communication.https.required=true
internal-communication.https.keystore.path=/ceshi/keystore.jks
internal-communication.https.keystore.key=123456
Server log on sbider-dev-01: cat /opt/presto-server-0.240/var/log/server.log
Companion catalogs: catalog_name1=catalog_name2,catalog_name3=catalog_name4,...
2021-01-12T12:41:09.766+0800 INFO main Bootstrap transaction.idle-check-interval 1.00m 1.00m Time interval between idle transactions checks
2021-01-12T12:41:09.766+0800 INFO main Bootstrap transaction.idle-timeout 5.00m 5.00m Amount of time before an inactive transaction is considered expired
2021-01-12T12:41:09.767+0800 INFO main Bootstrap transaction.max-finishing-concurrency 1 1 Maximum parallelism for committing or aborting a transaction
2021-01-12T12:41:09.767+0800 WARN main Bootstrap UNUSED PROPERTIES
2021-01-12T12:41:09.767+0800 WARN main Bootstrap internal-communication.shared-secret
2021-01-12T12:41:09.767+0800 WARN main Bootstrap
2021-01-12T12:41:11.037+0800 ERROR main com.facebook.presto.server.PrestoServer Unable to create injector, see the following errors:
1) Configuration property 'internal-communication.shared-secret' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:238)
1 error
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) Configuration property 'internal-communication.shared-secret' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:238)
1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:543)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:159)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:106)
at com.google.inject.Guice.createInjector(Guice.java:87)
at com.facebook.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:245)
at com.facebook.presto.server.PrestoServer.run(PrestoServer.java:131)
at com.facebook.presto.server.PrestoServer.main(PrestoServer.java:77)
You're following Trino (formerly Presto SQL) documentation for securing internal communication, but you got your Presto binary from Facebook's fork of the project (prestodb).
Go to https://trino.io/download.html to get the latest Trino release.
The alternative (using prestodb's documentation with prestodb's binary) is NOT a safe, viable option, due to known security issues that have not been fixed in the prestodb code base.
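For reference, a rough sketch of how the shared secret is typically generated and applied once you are on a Trino release (the openssl command is one common way to produce a long random value; the file name is illustrative):

# generate a long random secret once and copy the same value to every node
openssl rand 512 | base64 > shared-secret.txt

# etc/config.properties on the coordinator and all workers
# (the value is pasted in unquoted, unlike the config shown above):
#   internal-communication.shared-secret=<contents of shared-secret.txt>
#   internal-communication.https.required=true

The HTTPS keystore settings from the question can stay as they are; the key point is that internal-communication.shared-secret is not recognized by this prestodb 0.240 build, which is why the server rejects it as an unused property.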
I'm using Tivoli Directory Integrator (TDI) to sync users from Domino LDAP to the local DB2 people database of HCL Connections. On a test installation, I got the following error when trying to initially sync the users:
[root@cnx65 tdisol]# LANG=en_US.utf8 ./sync_all_dns.sh
create synchronization lock
log4j:WARN No appenders could be found for logger (server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
**********
CLFRN1275I: Begin to hash records in database.
CLFRN1269I: Finish hash records in database.
**********
"message": "CLFRN1254E: An error occurred while performing findEntry: {0}."
"exception": "com.ibm.lconn.profiles.api.tdi.service.TDIException: CLFRN1254E: An error occurred while performing findEntry: {0}."
Synchronize of Database Repository failed
HCL's documentation recommends checking the logs in case of CLFRN1254E. The file logs/SyncUpdates.log contains the following exception:
2020-01-21 07:50:03,803 INFO [org.apache.log4j.DailyRollingFileAppender.7431103d-4d0a-4d63-bdb7-61e274f23ed4] - CTGDIS092I Use entry provided at runtime as work entry (first pass only).
2020-01-21 07:50:11,723 ERROR [org.apache.log4j.DailyRollingFileAppender.7431103d-4d0a-4d63-bdb7-61e274f23ed4] - [hash_db_entries] CTGDIS181E Error while evaluating the hook 'Function error' in the component 'hash_db_entries (hash_db_entries.functioncall_fail).
com.ibm.lconn.profiles.api.tdi.service.TDIException: CLFRN1254E: An error occurred while executing findEntry: {0}.
at com.ibm.lconn.profiles.api.tdi.connectors.ProfileConnector$ProfileCodeBlock.handleRecoverable(ProfileConnector.java:1063)
at com.ibm.lconn.profiles.api.tdi.connectors.Util.TDICodeRunner.run(TDICodeRunner.java:41)
at com.ibm.lconn.profiles.api.tdi.connectors.ProfileConnector.getNextEntry(ProfileConnector.java:155)
at com.ibm.di.server.AssemblyLineComponent.executeOperation(AssemblyLineComponent.java:3370)
at com.ibm.di.server.AssemblyLineComponent.getnext(AssemblyLineComponent.java:932)
at com.ibm.di.server.AssemblyLine.msGetNextIteratorEntry(AssemblyLine.java:3689)
at com.ibm.di.server.AssemblyLine.executeMainStep(AssemblyLine.java:3388)
at com.ibm.di.server.AssemblyLine.executeMainLoop(AssemblyLine.java:3000)
at com.ibm.di.server.AssemblyLine.executeMainLoop(AssemblyLine.java:2983)
at com.ibm.di.server.AssemblyLine.executeAL(AssemblyLine.java:2952)
at com.ibm.di.server.AssemblyLine.run(AssemblyLine.java:1319)
Caused by: org.springframework.jdbc.BadSqlGrammarException: SqlMapClient operation; bad SQL grammar []; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred while applying a parameter map.
--- Check the TDIProfile.get-InlineParameterMap.
--- Check the statement (query failed).
--- Cause: com.ibm.db2.jcc.c.SqlException: DB2 SQL error: SQLCODE: -551, SQLSTATE: 42501, SQLERRMC: LCUSER;SELECT;EMPINST.EMPLOYEE
at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:97)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80)
at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:212)
at org.springframework.orm.ibatis.SqlMapClientTemplate.executeWithListResult(SqlMapClientTemplate.java:249)
at org.springframework.orm.ibatis.SqlMapClientTemplate.queryForList(SqlMapClientTemplate.java:296)
at com.ibm.lconn.profiles.internal.service.store.sqlmapdao.TDIProfileSqlMapDao.get(TDIProfileSqlMapDao.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
What could be the problem? How can I find out more about why this error occurs?
What I already tried
Increase log level
In profiles_tdi.properties I enabled debug logs for every component:
debug_collect=true
debug_draft=true
debug_fill_codes=true
debug_managers=true
debug_photos=true
debug_pronounce=true
debug_special=true
debug_update_profile=true
trace_profile_tdi_javascript=on
Since this had no effect, I set the log4j level to DEBUG for the entire application in etc/log4j.properties:
log4j.rootCategory=DEBUG, Default
I also tried ALL instead of DEBUG. However, there is no change in the output. I expected to see the SQL query that caused the exception.
Set mode in properties
According to this post, the mode attribute is used to decide whether a user is internal or external. Since the example config says
Actually, any string other than "external" is interpreted as employee.
it is set to mode=memberType. I also tried mode=uid and mode=mail. Both are fields containing a string not equal to "external", so this should result in all members being imported as internal users.
Sync single users
Since my LDAP filter matches around 60 users, I ran ./collect_dns.sh successfully and removed all users except my own from the collect.dns file. Then I synced that user from the dn file with ./populate_from_dn_file.sh. I did this for two other users as well, always resulting in the same error:
CLFRN0027I: After operation, success records is 0, duplicate records 0, failure records is 1, and last successful entry is null.
CLFRN1280I: 20200121105123 Iterations total number: 1.
The only difference is that logs/PopulateDBFromDNFile.log contains more detailed information about the fetched attributes, mappings, and so on. Unfortunately, it doesn't really help with the error, since it produces a similar message:
2020-01-21 10:55:27,530 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] [setup_if_lookup] CTGDIS126I Return false.
2020-01-21 10:55:27,530 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] [setup_if_lookup] CTGDIS123I Returned object class java.lang.Boolean.
2020-01-21 10:55:27,530 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS075I Trying to exit TaskCallBlock.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS076I Succeeded exiting TaskCallBlock.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS057I Hook after_functioncall not enabled.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS352I Use null Behavior for $_already_lookup_manager.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS351I Map Attribute $manager_uid [1].
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS353I Script is: conn["$manager_uid"]
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] CTGDIS352I Use null Behavior for $manager_uid.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS057I Hook functioncall_ok not enabled.
2020-01-21 10:55:27,531 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [add_manager_data] CTGDIS057I Hook default_ok not enabled.
2020-01-21 10:55:27,538 INFO [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] Result: <My Name of the User in dn file>
2020-01-21 10:55:27,591 ERROR [com.ibm.di.log.FileRollerAppender.268b5e1d-d0fc-4a7c-9e12-4d742c44faa5] - [callSyncDB_mod] [ProfileConnector] SqlMapClient operation; bad SQL grammar []; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred while applying a parameter map.
--- Check the TDIProfile.get-InlineParameterMap.
--- Check the statement (query failed).
--- Cause: com.ibm.db2.jcc.c.SqlException: DB2 SQL error: SQLCODE: -551, SQLSTATE: 42501, SQLERRMC: LCUSER;SELECT;EMPINST.EMPLOYEE
It turned out that this was an unlucky logical mistake on my part. The databases are created using SQL files shipped with the Connections installation wizard, which I import automatically in a loop. Since this was very slow (about 30 minutes for all scripts), I tried to parallelize the imports by adding a & at the end of the command, with a final wait to make sure all scripts had finished:
- name: Check and create non existing DBs for CNX
  become: yes
  become_user: "{{ db2.instance.name }}"
  shell: |
    db={{ item.name }}
    scripts=({{ item.files | join(' ') }})
    existing_dbs=$(echo -e '{{ existing_dbs.stdout }}')
    echo "Check db ${db}"
    if ! echo ${existing_dbs} | grep -q ${db}; then
      echo "DB ${db} doesn't exist, execute scripts"
      for script in "${scripts[@]}"
      do
        echo "${db}: Execute script ${script}"
        {{ db2.target }}/bin/db2 -td@ -f {{ cnx_sql_dir }}/${script} &
      done
      wait
    fi
  register: db_check
  changed_when: "'execute scripts' in db_check.stdout"
  loop: "{{ cnx.db_scripts }}"
cnx.db_scripts is a mapping of database names to SQL files:
db_scripts:
  - name: PEOPLEDB
    files:
      - profiles/db2/createDb.sql
      - profiles/db2/appGrants.sql
  - name: FORUM
    files:
      # - ...
In retrospect, this was a terrible logical mistake, because I missed the fact that those scripts depend on each other: when profiles/db2/appGrants.sql is executed before profiles/db2/createDb.sql has finished, it fails because the database doesn't exist yet.
As a result, TDI's queries failed because the database and tables were only partly created. I didn't notice this immediately, since the machine was re-deployed several times during the development of the Ansible playbook. Strangely, TDI failed in only 2 of 10 deployments. It seems DB2 queues the statements somehow, and depending on the timing the people database required for TDI is created successfully on some runs.
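For completeness, a sketch of the corrected task (same variables as above, not necessarily the exact playbook I ended up with): the scripts for one database now run strictly in order, so appGrants.sql only starts after createDb.sql has finished.

- name: Check and create non existing DBs for CNX
  become: yes
  become_user: "{{ db2.instance.name }}"
  shell: |
    db={{ item.name }}
    scripts=({{ item.files | join(' ') }})
    existing_dbs=$(echo -e '{{ existing_dbs.stdout }}')
    if ! echo ${existing_dbs} | grep -q ${db}; then
      echo "DB ${db} doesn't exist, execute scripts"
      for script in "${scripts[@]}"
      do
        echo "${db}: Execute script ${script}"
        # no trailing '&' here: each script must finish before the next one starts
        {{ db2.target }}/bin/db2 -td@ -f {{ cnx_sql_dir }}/${script}
      done
    fi
  register: db_check
  changed_when: "'execute scripts' in db_check.stdout"
  loop: "{{ cnx.db_scripts }}"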
Some days ago I installed Pentaho BA (full install) on a Red Hat system. I installed version 5.0.7.1-x64 because the latest 5.1 version gave me problems right after installation. The 5.0.7.1 version still has some problems (exceptions in the logs), but it was working fine until today. Today I logged in to the PUC and found some severe issues with the server's functionality: after logging in I only get the upper menu and not the content of the home page. I can access and see the "Browse Files" menu, but when I select a dashboard or a report it doesn't load: it just shows the upper bar and the title, as on the home page, but the dashboard itself is completely empty. Not even the "grid" for the various reports appears.
I copied the content of the log file catalina.out and you can find it here http://pastebin.com/hshXekFM
From line 33 there's the part where the server loads the login page. From line 51 it's after the login.
Here you can find the content of the catalina.2014-07-09.log http://pastebin.com/L7ReLdpt
and here the pentaho.log file:
2014-07-09 09:53:53,732 ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine] 2bc1b7a9-073e-11e4-8cb0-005056a82a08:SOLUTION-ENGINE:/public/bi-developers/Secure/global-department-list.xaction: SolutionEngine.ERROR_0007 - Action sequence execution failed
2014-07-09 10:20:56,969 ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine] f39bcc4a-0741-11e4-8cb0-005056a82a08:SOLUTION-ENGINE:/public/bi-developers/rules/session-region-list.xaction: SolutionEngine.ERROR_0007 - Action sequence execution failed
2014-07-09 10:20:57,185 ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine] f3be215b-0741-11e4-8cb0-005056a82a08:SOLUTION-ENGINE:/public/bi-developers/rules/session-region-list.xaction: SolutionEngine.ERROR_0007 - Action sequence execution failed
2014-07-09 10:20:57,224 ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine] f3c3035c-0741-11e4-8cb0-005056a82a08:SOLUTION-ENGINE:/public/bi-developers/rules/session-region-list.xaction: SolutionEngine.ERROR_0007 - Action sequence execution failed
2014-07-09 10:20:57,322 ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine] f3d2bacd-0741-11e4-8cb0-005056a82a08:SOLUTION-ENGINE:/public/bi-developers/rules/session-region-list.xaction: SolutionEngine.ERROR_0007 - Action sequence execution failed
2014-07-09 10:22:17,301 ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine] 23814d9e-0742-11e4-8cb0-005056a82a08:SOLUTION-ENGINE:/public/bi-developers/rules/session-region-list.xaction: SolutionEngine.ERROR_0007 - Action sequence execution failed
2014-07-09 10:22:17,343 ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine] 238656af-0742-11e4-8cb0-005056a82a08:SOLUTION-ENGINE:/public/bi-developers/rules/session-region-list.xaction: SolutionEngine.ERROR_0007 - Action sequence execution failed
2014-07-09 10:22:17,434 ERROR [org.pentaho.platform.engine.services.solution.SolutionEngine] 23941250-0742-11e4-8cb0-005056a82a08:SOLUTION-ENGINE:/public/bi-developers/rules/session-region-list.xaction: SolutionEngine.ERROR_0007 - Action sequence execution failed
2014-07-09 10:22:17,492 ERROR [org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob] ActionAdapterQuartzJob.ERROR_0004 - Action "org.pentaho.platform.plugin.action.builtin.ActionSequenceAction" failed to run as a quartz job
java.lang.Exception: java.io.FileNotFoundException
at org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob.invokeAction(ActionAdapterQuartzJob.java:271)
at org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob.execute(ActionAdapterQuartzJob.java:133)
at org.pentaho.platform.scheduler2.quartz.BlockingQuartzJob.execute(BlockingQuartzJob.java:38)
at org.quartz.core.JobRunShell.run(JobRunShell.java:199)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:546)
Caused by: java.io.FileNotFoundException
at org.pentaho.platform.web.http.api.resources.RepositoryFileStreamProvider.getInputStream(RepositoryFileStreamProvider.java:118)
at org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob$1.call(ActionAdapterQuartzJob.java:176)
at org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob$1.call(ActionAdapterQuartzJob.java:166)
at org.pentaho.platform.engine.security.SecurityHelper.runAsUser(SecurityHelper.java:179)
at org.pentaho.platform.engine.security.SecurityHelper.runAsUser(SecurityHelper.java:168)
at org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob.invokeAction(ActionAdapterQuartzJob.java:250)
... 4 more
2014-07-09 10:22:17,502 ERROR [org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob] ActionAdapterQuartzJob.ERROR_0001 - Property "ActionAdapterQuartzJob-ActionClass" or "ActionAdapterQuartzJob-ActionId" must be set in the job data map
2014-07-09 10:22:17,502 ERROR [org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob] ActionAdapterQuartzJob.ERROR_0002 - Failed to create an instance of action "unknown"
org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob$LoggingJobExecutionException: ActionAdapterQuartzJob.ERROR_0001 - Property "ActionAdapterQuartzJob-ActionClass" or "ActionAdapterQuartzJob-ActionId" must be set in the job data map
at org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob.resolveClass(ActionAdapterQuartzJob.java:79)
at org.pentaho.platform.scheduler2.quartz.ActionAdapterQuartzJob.execute(ActionAdapterQuartzJob.java:117)
at org.pentaho.platform.scheduler2.quartz.BlockingQuartzJob.execute(BlockingQuartzJob.java:48)
at org.quartz.core.JobRunShell.run(JobRunShell.java:199)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:546)
How can I solve this?
Thank you very much in advance
Just edit the file sessionStartupActions.xml and comment out the lines that are calling those .xaction files.
That happens when it tries to call the actions for the sample data that you probably didn't install.
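As an illustration (the exact bean layout in sessionStartupActions.xml varies by version, so treat this as a sketch rather than the literal file content), the startup-action entry that references the missing sample .xaction is simply wrapped in an XML comment:

<!-- disabled: startup action for sample data that is not installed
<bean ...>
  ...
  <property name="actionPath" value="public/bi-developers/rules/session-region-list.xaction"/>
  ...
</bean>
-->

After saving the file, restart the BA server so the change takes effect.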
This is very likely to be a Quartz configuration issue.
I would suggest re-checking the Quartz configuration files: have they been modified, by a patch or manually, since the last job/transformation was successfully scheduled?
That may be a starting point.
If this is not the solution, don't hesitate to file a bug; at the very least the developers could provide a more helpful error message for this case.
For enterprise users Pentaho provides official support, so don't waste time on forums and contact them directly.
Otherwise: Quartz is widely used, so there are certainly configurations where it works without this exception; the question is what differs between this one and the others.
Hope it helps a little bit.
I have mapped entities in playORM and my project was running fine with my entities mapped the way they were. However, after installing playORM 1.4.1, the latest version released in Maven, I got the null pointer exception below.
I want to find the error, but have no clue of where to start looking.
Any hint?
INFO: found meta=User locally
2012-11-09 17:32:22,918 com.alvazan.orm.layer9z.spi.db.cassandra.ColumnFamilyHelper waitForNodesToBeUpToDate
INFO: LOOP until all nodes have same schema version OR timeout in 300000 milliseconds
2012-11-09 17:32:22,939 com.alvazan.orm.layer9z.spi.db.cassandra.ColumnFamilyHelper tryToLoadColumnFamilyImpl
INFO: Well, we did NOT find any column family=User to load in cassandra(from virt=User)
2012-11-09 17:32:22,939 com.alvazan.orm.layer9z.spi.db.cassandra.ColumnFamilyHelper tryToLoadColumnFamilyVirt
INFO: Total time to LOAD column family meta from cassandra=21
java.lang.NullPointerException
at com.alvazan.orm.impl.meta.data.MetaEmbeddedSimple.translateToColumnImpl(MetaEmbeddedSimple.java:105)
at com.alvazan.orm.impl.meta.data.MetaEmbeddedSimple.translateToColumn(MetaEmbeddedSimple.java:93)
at com.alvazan.orm.impl.meta.data.MetaClassSingle.translateToRow(MetaClassSingle.java:82)
at com.alvazan.orm.layer0.base.BaseEntityManagerImpl.putImpl(BaseEntityManagerImpl.java:102)
at com.alvazan.orm.layer0.base.BaseEntityManagerImpl.put(BaseEntityManagerImpl.java:68)
at com.s1mbi0se.dmp.da.dao.UserDao.insertOrUpdateUser(UserDao.java:23)
at com.s1mbi0se.dmp.module.UserModule.persistData(UserModule.java:116)
at com.s1mbi0se.dmp.processor.mapred.SelectorReducer.reduce(SelectorReducer.java:60)
at com.s1mbi0se.dmp.processor.mapred.SelectorReducer.reduce(SelectorReducer.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
17:32:22,946 WARN Thread-3 mapred.LocalJobRunner:298 - job_local_0001
java.lang.InterruptedException
at com.s1mbi0se.dmp.processor.mapred.SelectorReducer.reduce(SelectorReducer.java:63)
at com.s1mbi0se.dmp.processor.mapred.SelectorReducer.reduce(SelectorReducer.java:1)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
2012-11-09 17:32:27,237 com.s1mbi0se.dmp.processor.main.DmpProcessorRunner run
EDIT: This is fixed in the master branch and will be released soon. (11/27/12)
The log formatting seems a bit off, but this is the important part:
java.lang.NullPointerException at com.alvazan.orm.impl.meta.data.MetaEmbeddedSimple.translateToColumnImpl(MetaEmbeddedSimple.java:105)
Line 105 points to this code...
for (T val : toBeAdded) {        // line 105: toBeAdded is dereferenced here
    byte[] name = formTheName(val);
    Column c = new Column();
    c.setName(name);
    row.getColumns().add(c);
}
Specifically, line 105 is the first line, so toBeAdded is null for some reason... looking at who called this method.
Hmmm, it turns out ONE of your entities has a null list of something. We need to add code here so that if your entity has a null list, we create an empty one instead. Can you file a ticket and link to this URL? We can fix this one easily.
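A minimal sketch of the kind of guard described above, reusing the names from the snippet earlier (this is not the actual playORM patch):

// treat a null list on the entity as an empty one instead of failing
if (toBeAdded == null) {
    toBeAdded = new ArrayList<T>();
}
for (T val : toBeAdded) {
    byte[] name = formTheName(val);
    Column c = new Column();
    c.setName(name);
    row.getColumns().add(c);
}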
NOTE: For every entity with a field like
private List something;
I 100% always define it like this instead:
private List something = new ArrayList();
That avoids NullPointerExceptions all over the place, which is why I missed this one :( :( ... anyway, we will fix playORM to allow these null lists.
thanks,
Dean
This is fixed in release 1.4.2, which is available in the Maven repo.