Fresh install of Apache Directory Studio on macOS fails to start a new LDAP server - apache-directory

I installed Apache Directory Studio version 2.0.0-M17 via the official download link. I launched the application from Launchpad and created a new LDAP server.
However, when I try to start the server, the status quickly changes from Starting to Stopped, and no logs are displayed. How can I debug the problem?
I tried modifying ApacheDirectoryStudio.ini and uncommented the following lines to configure the heap memory:
-Xms1g
-Xmx2g
I relaunched the Eclipse-based Studio application, but the behavior is unchanged.
Here is the apacheds.log:
[17:34:12] INFO [org.apache.directory.server.UberjarMain] - Starting the service.
[17:34:14] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.3.6.1.4.1.42.2.27.8.5.1' already exists in the attribute (supportedControl)
[17:34:14] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.2.840.113556.1.4.841' already exists in the attribute (supportedControl)
[17:34:14] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.3.6.1.4.1.4203.1.9.1.2' already exists in the attribute (supportedControl)
[17:34:14] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.2.840.113556.1.4.319' already exists in the attribute (supportedControl)
[17:34:14] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.2.840.113556.1.4.528' already exists in the attribute (supportedControl)
[17:34:14] WARN [org.apache.directory.server.core.DefaultDirectoryService] - You didn't change the admin password of directory service instance 'default'. Please update the admin password as soon as possible to prevent a possible security breach.
[09:32:44] INFO [org.apache.directory.server.UberjarMain] - Starting the service.
[09:32:46] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.3.6.1.4.1.42.2.27.8.5.1' already exists in the attribute (supportedControl)
[09:32:46] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.2.840.113556.1.4.841' already exists in the attribute (supportedControl)
[09:32:46] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.3.6.1.4.1.4203.1.9.1.2' already exists in the attribute (supportedControl)
[09:32:46] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.2.840.113556.1.4.319' already exists in the attribute (supportedControl)
[09:32:46] WARN [org.apache.directory.api.ldap.model.entry.DefaultAttribute] - ERR_13207_VALUE_ALREADY_EXISTS The value '1.2.840.113556.1.4.528' already exists in the attribute (supportedControl)
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.api.ldap.model.entry.Value] - MSG_13202_AT_IS_NULL ()
[09:32:47] WARN [org.apache.directory.server.core.DefaultDirectoryService] - You didn't change the admin password of directory service instance 'default'. Please update the admin password as soon as possible to prevent a possible security breach.
Operating System: macOS 10.15.7
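Since the UI shows no output, the server instance's own log files are the next place to look. A minimal sketch for locating them, assuming the default workspace location on macOS (`~/.ApacheDirectoryStudio` is an assumption; adjust if you chose a custom workspace):

```shell
# Apache Directory Studio keeps each LDAP server instance (and its logs)
# inside its workspace; ~/.ApacheDirectoryStudio is the assumed default here.
WORKSPACE="${WORKSPACE:-$HOME/.ApacheDirectoryStudio}"
find "$WORKSPACE" -name 'apacheds*.log' -o -name 'wrapper*.log' 2>/dev/null
# Tail the newest match while clicking Start in Studio to catch the
# actual startup failure, e.g.:
# tail -f "<path printed above>"
```

If the server dies before writing anything there, the Studio error log (Window > Show View > Error Log) may capture the exception instead.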

Related

Hive3.1 metastore process generates a large number of s3 accesses resulting in additional overhead

We store the HDFS data in S3 in our production environment.
After upgrading Hive from 1.2 to 3.1, we found that the number of S3 requests increased rapidly, which resulted in additional costs.
The following figure shows the access statistics of the S3 bucket. While Hive is running, there is still a large number of S3 access requests even when no external application is in use. When Hive is shut down, the access volume immediately returns to normal.
Checking the Hive metastore log while all upper-level applications of Hive were shut down, I found that the log grows rapidly and contains a large number of accesses to the Hive tables (the S3 bucket).
The following log excerpt shows part of the get_table access. After analyzing the full logs, I found that it is a traversal scan of all tables in Hive (thousands of tables), which generates the large number of S3 requests.
2023-02-06T16:46:42,899 INFO [pool-6-thread-3]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(952)) - 3: source:**.***.*.*** get_table : tbl=hive.ods.ods_wkfl_act_ru_event_subscr
2023-02-06T16:46:42,899 INFO [pool-6-thread-3]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(360)) - ugi=hive ip=**.***.*.*** cmd=source:**.***.*.*** get_table :tbl=hive.ods.ods_wkfl_act_ru_event_subscr
2023-02-06T16:46:42,907 INFO [pool-6-thread-3]: metastore.MetastoreDefaultTransformer (MetastoreDefaultTransformer.java:transform(96)) - Starting translation for processor HMSClient-#master1 on list 1
2023-02-06T16:46:42,907 INFO [pool-6-thread-3]: metastore.MetastoreDefaultTransformer (MetastoreDefaultTransformer.java:transform(115)) - Table ods_wkfl_act_ru_event_subscr,#bucket=-1,isBucketed:false,tableType=EXTERNAL_TABLE,tableCapabilities=null
2023-02-06T16:46:42,908 INFO [pool-6-thread-3]: metastore.MetastoreDefaultTransformer (MetastoreDefaultTransformer.java:transform(438)) - Transformer return list of 1
2023-02-06T16:46:42,908 INFO [pool-6-thread-3]: authorization.StorageBasedAuthorizationProvider (StorageBasedAuthorizationProvider.java:userHasProxyPrivilege(172)) - userhive has host proxy privilege.
2023-02-06T16:46:42,908 INFO [pool-6-thread-3]: authorization.StorageBasedAuthorizationProvider (StorageBasedAuthorizationProvider.java:checkPermissions(395)) - Path authorization is skipped for path s3a://**/apps/hive/warehouse/ods_qa/data/wkfl_act_ru_event_subscr20230202.
2023-02-06T16:46:42,940 INFO [pool-6-thread-3]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(952)) - 3: source:**.***.*.*** get_table : tbl=hive.ods.ods_wkfl_act_ru_event_subscr
2023-02-06T16:46:42,940 INFO [pool-6-thread-3]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(360)) - ugi=hive ip=**.***.*.*** cmd=source:**.***.*.*** get_table :tbl=hive.ods.ods_wkfl_act_ru_event_subscr
2023-02-06T16:46:42,948 INFO [pool-6-thread-3]: metastore.MetastoreDefaultTransformer (MetastoreDefaultTransformer.java:transform(96)) - Starting translation for processor HMSClient-#master1 on list 1
I compared the configurations of Hive 1.2.1 and Hive 3, found their differences, and tried setting hive.compactor.initiator.on to false, but this did not solve the problem.
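For reference, the compactor change mentioned above corresponds to a hive-site.xml fragment like this (a sketch of the edit that was tried, not a fix, since it did not resolve the issue):

```xml
<property>
  <name>hive.compactor.initiator.on</name>
  <value>false</value>
</property>
```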
I also compared the metastore logs of Hive 1.2 and Hive 3.1 and found that Hive 1.2 uses get_all_databases, which means only one S3 access is generated per scan cycle. For the specific logs, see the following excerpt:
2023-02-06 16:49:57,578 INFO [pool-5-thread-200]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(317)) - ugi=ambari-qa ip=**.**.**.** cmd=source:**.**.**.** get_all_databases
2023-02-06 16:49:57,661 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.log.dir does not exist
2023-02-06 16:49:57,661 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.driver.parallel.compilation does not exist
2023-02-06 16:49:57,661 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.log.file does not exist
2023-02-06 16:49:57,661 INFO [pool-5-thread-200]: metastore.HiveMetaStore (HiveMetaStore.java:newRawStoreForConf(619)) - 200: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2023-02-06 16:49:57,787 INFO [pool-5-thread-200]: metastore.ObjectStore (ObjectStore.java:initializeHelper(383)) - ObjectStore, initialize called
2023-02-06 16:49:57,871 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.log.dir does not exist
2023-02-06 16:49:57,871 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.driver.parallel.compilation does not exist
2023-02-06 16:49:57,872 WARN [pool-5-thread-200]: conf.HiveConf (HiveConf.java:initialize(3093)) - HiveConf of name hive.log.file does not exist
2023-02-06 16:49:57,893 INFO [pool-5-thread-200]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(163)) - Using direct SQL, underlying DB is MYSQL
2023-02-06 16:49:57,893 INFO [pool-5-thread-200]: metastore.ObjectStore (ObjectStore.java:setConf(297)) - Initialized ObjectStore
2023-02-06 16:52:54,140 INFO [pool-5-thread-200]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(773)) - 200: source:**.**.**.** get_all_functions
2023-02-06 16:52:54,140 INFO [pool-5-thread-200]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(317)) - ugi=ambari-qa ip=**.**.**.** cmd=source:**.**.**.** get_all_functions
2023-02-06 16:52:56,155 INFO [pool-5-thread-200]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(773)) - 200: Shutting down the object store...
2023-02-06 16:52:56,155 INFO [pool-5-thread-200]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(317)) - ugi=ambari-qa ip=**.**.**.** cmd=Shutting down the object store...
Based on the above analysis, I believe this traversal is what causes the surge in S3 access.
What I want to know is which Hive parameters I can modify, or what other approach I can take, so that the metastore scan no longer traverses every table but uses the get_all_databases approach instead. I've found few similar answers online and am not sure whether my reasoning is correct. I'd really appreciate any help, thank you very much!
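One avenue that may be worth testing is the metastore's background housekeeping tasks in Hive 3, which did not exist in Hive 1.2. Whether they are what drives the get_table traversal seen above is an assumption, not something the logs confirm, but the following hive-site.xml sketch disables or throttles them:

```xml
<!-- Hypothetical tuning sketch: these properties exist in Hive 3's metastore,
     but whether they cause the observed get_table traversal is an assumption. -->
<property>
  <name>metastore.housekeeping.threads.on</name>
  <value>false</value>
</property>
<property>
  <!-- the partition discovery/retention task scans tables on a timer;
       raising the interval reduces how often a full scan runs -->
  <name>metastore.partition.management.task.frequency</name>
  <value>24h</value>
</property>
```

After changing either setting, restart the metastore and watch whether the get_table audit entries stop appearing while all upstream applications are shut down.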

Can't Install Netlify cms plugin Gatsby

I tried to install Netlify CMS via my gatsby-config.js, but it shows an "unreachable" error. How can I fix it?
My GitHub repo: https://github.com/muhammadizzuddin/portofolio/blob/main/gatsby-config.js
ERROR
There was a problem loading plugin "gatsby-plugin-netlify-cms". Perhaps you need to install its package?
Use --verbose to see actual error.
ERROR
Failed to resolve gatsby-plugin-netlify-cms unreachable
Error: unreachable
Please help me.
ERROR
There was a problem loading plugin "gatsby-plugin-netlify-cms". Perhaps you need to install its package?
Use --verbose to see actual error.
ERROR
Failed to resolve gatsby-plugin-netlify-cms unreachable
Error: unreachable
- load.ts:144 resolvePlugin
[portofolio]/[gatsby]/src/bootstrap/load-plugins/load.ts:144:11
- index.js:37 resolveTheme
[portofolio]/[gatsby]/src/bootstrap/load-themes/index.js:37:29
- index.js:115
[portofolio]/[gatsby]/src/bootstrap/load-themes/index.js:115:30
- util.js:16 tryCatcher
[portofolio]/[bluebird]/js/release/util.js:16:23
- reduce.js:166 Object.gotValue
[portofolio]/[bluebird]/js/release/reduce.js:166:18
- reduce.js:155 Object.gotAccum
[portofolio]/[bluebird]/js/release/reduce.js:155:25
- util.js:16 Object.tryCatcher
[portofolio]/[bluebird]/js/release/util.js:16:23
- promise.js:547 Promise._settlePromiseFromHandler
[portofolio]/[bluebird]/js/release/promise.js:547:31
- promise.js:604 Promise._settlePromise
[portofolio]/[bluebird]/js/release/promise.js:604:18
- promise.js:649 Promise._settlePromise0
[portofolio]/[bluebird]/js/release/promise.js:649:10
- promise.js:729 Promise._settlePromises
[portofolio]/[bluebird]/js/release/promise.js:729:18
- async.js:93 _drainQueueStep
[portofolio]/[bluebird]/js/release/async.js:93:12
- async.js:86 _drainQueue
[portofolio]/[bluebird]/js/release/async.js:86:9
- async.js:102 Async._drainQueues
[portofolio]/[bluebird]/js/release/async.js:102:5
- async.js:15 Immediate.Async.drainQueues [as _onImmediate]
[portofolio]/[bluebird]/js/release/async.js:15:14
- timers.js:462 processImmediate
internal/timers.js:462:21
not finished open and validate gatsby-configs - 0.591s
The link is broken, so I've assumed that the working one is https://github.com/muhammadizzuddin/my-blog.
You just need to add gatsby-plugin-netlify-cms to your package.json by running:
npm i gatsby-plugin-netlify-cms
This will update your package.json file and tell Netlify which packages need to be installed when deploying.

Webpack error when deploying Rails 6 app on elastic beanstalk with npm, yarn and webpack

I tried to deploy my Rails 6 app, which runs stable in production (!), into a second new environment on Elastic Beanstalk (EB), but I cannot get it to run despite simply copying the configuration from the first setup.
After researching all the resources I could find for two days, I'm currently stuck with the following error: compilation failed: webpack not installed:
-------------------------------------
/var/log/eb-activity.log
-------------------------------------
+++ export RUBY_VERSION=2.6.5
+++ RUBY_VERSION=2.6.5
+++ export GEM_ROOT=/opt/rubies/ruby-2.6.5/lib/ruby/gems/2.6.0
+++ GEM_ROOT=/opt/rubies/ruby-2.6.5/lib/ruby/gems/2.6.0
++ (( 0 != 0 ))
+ /opt/elasticbeanstalk/support/scripts/check-for-gem.rb puma
+ echo true
[2020-01-09T20:22:02.966Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/AppDeployPreHook] : Completed activity. Result:
Successfully execute hooks in directory /opt/elasticbeanstalk/hooks/appdeploy/pre.
[2020-01-09T20:22:02.966Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild] : Starting activity...
[2020-01-09T20:22:10.381Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild] : Starting activity...
[2020-01-09T20:22:10.462Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_XXX] : Starting activity...
[2020-01-09T20:22:23.059Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_XXX/Command 01_restart_nginx] : Starting activity...
[2020-01-09T20:22:24.293Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_XXX/Command 01_restart_nginx] : Completed activity. Result:
nginx: [warn] conflicting server name "_" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
Stopping nginx: [ OK ]
Starting nginx: nginx: [warn] conflicting server name "_" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
[ OK ]
[2020-01-09T20:22:24.293Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_XXX] : Completed activity.
[2020-01-09T20:22:24.375Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_XXX] : Starting activity...
[2020-01-09T20:22:37.045Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_XXX/Command 10_install_webpack] : Starting activity...
[2020-01-09T20:25:12.155Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_XXX/Command 10_install_webpack] : Completed activity. Result:
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents#1.2.11 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents#1.2.11: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
+ webpack#4.41.5
added 322 packages from 197 contributors and audited 4227 packages in 144.209s
3 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
[2020-01-09T20:25:13.256Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_XXX/Command 11_precompile] : Starting activity...
[2020-01-09T20:30:24.349Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_XXX/Command 11_precompile] : Activity execution failed, because: yarn install v1.21.1
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
[1/4] Resolving packages...
[2/4] Fetching packages...
info fsevents#1.2.9: The platform "linux" is incompatible with this module.
info "fsevents#1.2.9" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
Done in 162.47s.
yarn install v1.21.1
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
[1/4] Resolving packages...
[2/4] Fetching packages...
info fsevents#1.2.9: The platform "linux" is incompatible with this module.
info "fsevents#1.2.9" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
[4/4] Building fresh packages...
Done in 83.51s.
I, [2020-01-09T20:29:36.608192 #2970] INFO -- : Writing /var/app/ondeck/public/assets/manifest-cadda289ef9c70eaa0879a36e6263cb33f7523a16b3ef862e0b8609cdc2bdab1.js
I, [2020-01-09T20:29:36.609243 #2970] INFO -- : Writing /var/app/ondeck/public/assets/manifest-cadda289ef9c70eaa0879a36e6263cb33f7523a16b3ef862e0b8609cdc2bdab1.js.gz
I, [2020-01-09T20:29:36.609422 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-c0be3de839559053bb0a9486d5645ccba7a7452f6ef0370ee498e1fa59e364b2.png
I, [2020-01-09T20:29:36.609592 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-469a46bb9645a42d499c7f74ee69ffad4176e08c4373b6fe67a418e8289f3d83.png
I, [2020-01-09T20:29:36.609775 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-e57001fd85a8e1f4da2bb4bbd309a7d880a9d18e3447d743190ff9befb86413f.jpg
I, [2020-01-09T20:29:36.610062 #2970] INFO -- : Writing /var/app/ondeck/public/assets/application-e18be23bdc9236e71700193c31376705b918eab0738fdd68ef83e572da76c13d.css
I, [2020-01-09T20:29:36.610155 #2970] INFO -- : Writing /var/app/ondeck/public/assets/application-e18be23bdc9236e71700193c31376705b918eab0738fdd68ef83e572da76c13d.css.gz
I, [2020-01-09T20:29:36.610249 #2970] INFO -- : Writing /var/app/ondeck/public/assets/wall_street-e57001fd85a8e1f4da2bb4bbd309a7d880a9d18e3447d743190ff9befb86413f.jpg
I, [2020-01-09T20:29:36.610500 #2970] INFO -- : Writing /var/app/ondeck/public/assets/asset-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css
I, [2020-01-09T20:29:36.610582 #2970] INFO -- : Writing /var/app/ondeck/public/assets/asset-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css.gz
I, [2020-01-09T20:29:36.690087 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css
I, [2020-01-09T20:29:36.690356 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css.gz
I, [2020-01-09T20:29:36.690743 #2970] INFO -- : Writing /var/app/ondeck/public/assets/main-e18be23bdc9236e71700193c31376705b918eab0738fdd68ef83e572da76c13d.css
I, [2020-01-09T20:29:36.690957 #2970] INFO -- : Writing /var/app/ondeck/public/assets/main-e18be23bdc9236e71700193c31376705b918eab0738fdd68ef83e572da76c13d.css.gz
I, [2020-01-09T20:29:36.691548 #2970] INFO -- : Writing /var/app/ondeck/public/assets/round-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css
I, [2020-01-09T20:29:36.772944 #2970] INFO -- : Writing /var/app/ondeck/public/assets/round-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css.gz
Compiling...
Compilation failed:
webpack not installed
Install webpack to start bundling: [32m
$ npm install --save-dev webpack
(ElasticBeanstalk::ExternalInvocationError)
[2020-01-09T20:30:24.349Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_XXX/Command 11_precompile] : Activity failed.
[2020-01-09T20:30:24.349Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_1_XXX] : Activity failed.
[2020-01-09T20:30:24.349Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild] : Activity failed.
[2020-01-09T20:30:25.008Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0/EbExtensionPostBuild] : Activity failed.
[2020-01-09T20:30:25.008Z] INFO [1508] - [Application update app-f4fb-200109_211455#11/AppDeployStage0] : Activity failed.
[2020-01-09T20:30:25.009Z] INFO [1508] - [Application update app-f4fb-200109_211455#11] : Completed activity. Result:
Application update - Command CMD-AppDeploy failed
[2020-01-09T20:36:59.175Z] INFO [5013] - [CMD-TailLogs] : Starting activity...
[2020-01-09T20:36:59.175Z] INFO [5013] - [CMD-TailLogs/AddonsBefore] : Starting activity...
[2020-01-09T20:36:59.175Z] INFO [5013] - [CMD-TailLogs/AddonsBefore] : Completed activity.
[2020-01-09T20:36:59.175Z] INFO [5013] - [CMD-TailLogs/TailLogs] : Starting activity...
[2020-01-09T20:36:59.175Z] INFO [5013] - [CMD-TailLogs/TailLogs/TailLogs] : Starting activity...
-------------------------------------
/var/log/eb-commandprocessor.log
-------------------------------------
[2020-01-09T20:15:53.363Z] DEBUG [1508] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||_API||_Commands..
[2020-01-09T20:15:53.364Z] INFO [1508] : Found enabled addons: ["logpublish", "logstreaming"].
[2020-01-09T20:15:53.444Z] INFO [1508] : Updating Command definition of addon logpublish.
[2020-01-09T20:15:53.444Z] INFO [1508] : Updating Command definition of addon logstreaming.
[2020-01-09T20:15:53.444Z] DEBUG [1508] : Retrieving metadata for key: AWS::CloudFormation::Init||Infra-WriteApplication2||files..
[2020-01-09T20:15:53.445Z] DEBUG [1508] : Retrieving metadata for key: AWS::ElasticBeanstalk::Ext||ManifestFileS3Key..
[2020-01-09T20:15:56.071Z] INFO [1508] : Loading manifest from bucket 'elasticbeanstalk-eu-west-1-511205900284' using computed S3 key 'resources/environments/e-zcsy2pgich/_runtime/versions/manifest_1578600905237'.
[2020-01-09T20:15:56.520Z] INFO [1508] : Updated manifest cache: deployment ID 11 and serial 18.
[2020-01-09T20:15:56.520Z] DEBUG [1508] : Loaded definition of Command CMD-AppDeploy.
[2020-01-09T20:15:56.520Z] INFO [1508] : Executing Application update
[2020-01-09T20:15:56.520Z] INFO [1508] : Executing command: CMD-AppDeploy...
[2020-01-09T20:15:56.520Z] INFO [1508] : Executing command CMD-AppDeploy activities...
[2020-01-09T20:15:56.520Z] DEBUG [1508] : Setting environment variables..
[2020-01-09T20:15:56.520Z] INFO [1508] : Running AddonsBefore for command CMD-AppDeploy...
[2020-01-09T20:15:59.632Z] DEBUG [1508] : Running stages of Command CMD-AppDeploy from stage 0 to stage 1...
[2020-01-09T20:15:59.632Z] INFO [1508] : Running stage 0 of command CMD-AppDeploy...
[2020-01-09T20:15:59.632Z] INFO [1508] : Running leader election...
[2020-01-09T20:16:07.059Z] INFO [1508] : Instance is Leader.
[2020-01-09T20:16:07.059Z] DEBUG [1508] : Loaded 5 actions for stage 0.
[2020-01-09T20:16:07.059Z] INFO [1508] : Running 1 of 5 actions: DownloadSourceBundle...
[2020-01-09T20:16:14.038Z] INFO [1508] : Running 2 of 5 actions: EbExtensionPreBuild...
[2020-01-09T20:18:12.566Z] INFO [1508] : Running 3 of 5 actions: AppDeployPreHook...
[2020-01-09T20:22:02.966Z] INFO [1508] : Running 4 of 5 actions: EbExtensionPostBuild...
[2020-01-09T20:30:25.008Z] ERROR [1508] : Command execution failed: Activity failed. (ElasticBeanstalk::ActivityFatalError)
caused by: yarn install v1.21.1
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
[1/4] Resolving packages...
[2/4] Fetching packages...
info fsevents#1.2.9: The platform "linux" is incompatible with this module.
info "fsevents#1.2.9" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Saved lockfile.
Done in 162.47s.
yarn install v1.21.1
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
[1/4] Resolving packages...
[2/4] Fetching packages...
info fsevents#1.2.9: The platform "linux" is incompatible with this module.
info "fsevents#1.2.9" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
[4/4] Building fresh packages...
Done in 83.51s.
I, [2020-01-09T20:29:36.608192 #2970] INFO -- : Writing /var/app/ondeck/public/assets/manifest-cadda289ef9c70eaa0879a36e6263cb33f7523a16b3ef862e0b8609cdc2bdab1.js
I, [2020-01-09T20:29:36.609243 #2970] INFO -- : Writing /var/app/ondeck/public/assets/manifest-cadda289ef9c70eaa0879a36e6263cb33f7523a16b3ef862e0b8609cdc2bdab1.js.gz
I, [2020-01-09T20:29:36.609422 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-c0be3de839559053bb0a9486d5645ccba7a7452f6ef0370ee498e1fa59e364b2.png
I, [2020-01-09T20:29:36.609592 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-469a46bb9645a42d499c7f74ee69ffad4176e08c4373b6fe67a418e8289f3d83.png
I, [2020-01-09T20:29:36.609775 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-e57001fd85a8e1f4da2bb4bbd309a7d880a9d18e3447d743190ff9befb86413f.jpg
I, [2020-01-09T20:29:36.610062 #2970] INFO -- : Writing /var/app/ondeck/public/assets/application-e18be23bdc9236e71700193c31376705b918eab0738fdd68ef83e572da76c13d.css
I, [2020-01-09T20:29:36.610155 #2970] INFO -- : Writing /var/app/ondeck/public/assets/application-e18be23bdc9236e71700193c31376705b918eab0738fdd68ef83e572da76c13d.css.gz
I, [2020-01-09T20:29:36.610249 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-e57001fd85a8e1f4da2bb4bbd309a7d880a9d18e3447d743190ff9befb86413f.jpg
I, [2020-01-09T20:29:36.610500 #2970] INFO -- : Writing /var/app/ondeck/public/assets/asset-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css
I, [2020-01-09T20:29:36.610582 #2970] INFO -- : Writing /var/app/ondeck/public/assets/asset-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css.gz
I, [2020-01-09T20:29:36.690087 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css
I, [2020-01-09T20:29:36.690356 #2970] INFO -- : Writing /var/app/ondeck/public/assets/XXX-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css.gz
I, [2020-01-09T20:29:36.690743 #2970] INFO -- : Writing /var/app/ondeck/public/assets/main-e18be23bdc9236e71700193c31376705b918eab0738fdd68ef83e572da76c13d.css
I, [2020-01-09T20:29:36.690957 #2970] INFO -- : Writing /var/app/ondeck/public/assets/main-e18be23bdc9236e71700193c31376705b918eab0738fdd68ef83e572da76c13d.css.gz
I, [2020-01-09T20:29:36.691548 #2970] INFO -- : Writing /var/app/ondeck/public/assets/round-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css
I, [2020-01-09T20:29:36.772944 #2970] INFO -- : Writing /var/app/ondeck/public/assets/round-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css.gz
Compiling...
Compilation failed:
webpack not installed
Install webpack to start bundling: [32m
$ npm install --save-dev webpack
(ElasticBeanstalk::ExternalInvocationError)
[2020-01-09T20:30:25.009Z] ERROR [1508] : Command CMD-AppDeploy failed!
[2020-01-09T20:30:25.086Z] INFO [1508] : Command processor returning results:
{"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"(TRUNCATED)...NFO -- : Writing /var/app/ondeck/public/assets/round-d0ff5974b6aa52cf562bea5921840c032a860a91a3512f7fe8f768f6bbe005f6.css.gz\nCompiling...\nCompilation failed:\n\nwebpack not installed\n\nInstall webpack to start bundling: \u001b[32m\n$ npm install --save-dev webpack. \ncontainer_command 11_precompile in .ebextensions/yarn.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI","returncode":1,"events":[]}],"truncated":"true"}
Logically, I tried to install webpack again and again, which should normally already be handled by the ebextensions file yarn.config:
commands:
  # 01_remove_clean_and_install_latest_nodejs:
  #   # run this command from /tmp directory
  #   cwd: /tmp
  #   test: '[ -f /usr/bin/node ] && echo "remove previous node"'
  #   command: 'sudo yum remove -y nodejs | sudo rm /etc/yum.repos.d/nodesource*'
  02_node_get:
    # run this command from /tmp directory
    cwd: /tmp
    # flag -y for no-interaction installation
    command: 'sudo curl --silent --location https://rpm.nodesource.com/setup_13.x | sudo bash -'
  03_node_install:
    # run this command from /tmp directory
    cwd: /tmp
    command: 'sudo yum -y install nodejs'
  04_yarn_get:
    # run this command from /tmp directory
    cwd: /tmp
    # don't run the command if yarn is already installed (file /usr/bin/yarn exists)
    test: '[ ! -f /usr/bin/yarn ] && echo "yarn not installed"'
    command: 'sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo'
  05_yarn_install:
    # run this command from /tmp directory
    cwd: /tmp
    test: '[ ! -f /usr/bin/yarn ] && echo "yarn not installed"'
    command: 'sudo yum -y install yarn'
  06_mkdir_webapp_dir:
    command: mkdir /home/webapp
    ignoreErrors: true
  07_chown_webapp_dir:
    command: chown webapp:webapp /home/webapp
    ignoreErrors: true
  08_chmod_webapp_dir:
    command: chmod 700 /home/webapp
    ignoreErrors: true
  09_update_bundler:
    command: gem update bundler
    ignoreErrors: true
container_commands:
  10_install_webpack:
    command: "sudo npm install --save-dev webpack"
  11_precompile:
    command: "bundle exec rake assets:precompile"
My other settings include RAILS_SKIP_ASSET_COMPILATION=true in EB.
I tried installing webpack manually after SSH-ing into the machine, via npm and yarn, with and without sudo, but nothing worked.
Each eb deploy takes roughly 20 minutes and then times out into a "severe" state with the above error.
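When SSH-ing in, one quick check is whether the webpack binary actually ends up where webpacker looks for it. A sketch, assuming the EB staging directory /var/app/ondeck seen in the logs above (override APP_DIR if yours differs):

```shell
# Check whether webpack is visible from the app directory. /var/app/ondeck
# is the EB staging dir from the deploy logs; APP_DIR is overridable.
APP_DIR="${APP_DIR:-/var/app/ondeck}"
if [ -x "$APP_DIR/node_modules/.bin/webpack" ]; then
  "$APP_DIR/node_modules/.bin/webpack" --version
else
  echo "webpack binary not found under $APP_DIR/node_modules/.bin"
fi
```

If the binary is missing there, an npm install run from another working directory (as in the 10_install_webpack command) would not help, since node resolves packages relative to the app directory.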
Edit
For some weird reason I got it to work now by running bundle exec rails webpacker:install in addition to having it in the Gemfile. I'm still not comfortable with the setup, but at least it's working for now.
Update: We had two elasticsearch gems that were not being installed during bundle install, but removing them from our Gemfile allowed the deployment to go through. We noticed the issue locally as well: a bundle install hung on "resolving dependencies" for about 4 hours, which is roughly how long it took our deployment to eventually succeed. I'd start by checking gems.
I'm not sure if this is related, but we have an open case with AWS support currently over a similar issue where deployments seemingly fail after hanging on some of AWS's own PreDeploy hook scripts (namely one that checks for Puma). We have been unable to successfully clone or deploy known working app code bundles to any new or existing instances. The issue started on the 7th of January, 2019.
AWS has since confirmed that the issue is on their end, but we have no remedy yet.
Sidenote: Code eventually will deploy after about 4 hours, but the instances all show degraded or error.

Onlyoffice integrate into nextcloud. Error while downloading the document file to be converted

I had already looked around, but nothing I found solved my problem. I installed ONLYOFFICE Document Server on another server. Now I would like to use the add-on in Nextcloud. When I enter the server IP in Nextcloud, I get the following error:
Error while downloading the document file to be converted
In the Nextcloud config.php I also added:

'onlyoffice' => array (
  'verify_peer_off' => true,
),
Calling the healthcheck endpoint gives a positive result.
Here is an excerpt from the converter's log:
[2019-08-29T16:29:49.962] [WARN] nodeJS - worker 11687 started.
[2019-08-29T16:29:49.963] [WARN] nodeJS - update cluster with 1 workers
[2019-08-29T16:40:12.293] [ERROR] nodeJS - error downloadFile:url=https://next.mydomain.xx/apps/onlyoffice/empty?doc=eyJ0eXAiOiJxyzv4oPYyTYdvdZNgMz$
Error: Parse Error
at TLSSocket.socketOnData (_http_client.js:454:20)
at emitOne (events.js:116:13)
at TLSSocket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at TLSSocket.Readable.push (_stream_readable.js:208:10)
at TLSWrap.onread (net.js:601:20)
I would be very happy about any suggested solutions.
The reason is that the certificate of next.mydomain.xx cannot be validated by the Document Server.
You can disable certificate verification in DS config
/etc/onlyoffice/documentserver/default.json by setting rejectUnauthorized to false. After that, you need to restart DS services: supervisorctl restart all
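For reference, the setting usually lives under services.CoAuthoring.requestDefaults in default.json; the exact location may differ between Document Server versions, so treat this fragment as a sketch:

```json
{
  "services": {
    "CoAuthoring": {
      "requestDefaults": {
        "rejectUnauthorized": false
      }
    }
  }
}
```

Note that this disables TLS certificate verification for outgoing requests from the Document Server, so it is a workaround rather than a fix; the proper fix is a certificate for next.mydomain.xx that the Document Server host trusts.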
If that doesn't help, specify the version, OS and installation type of the DocumentServer.

Node.js/ webpack/ getaddrinfo looking for internet when not needed, why?

I've got an issue with a machine that's supposed to be able to run offline.
I can pull the cable after my application is running, but during unplugged start I get the following error:
May 6 23:04:50 myco serve[4121]: (node:4121) UnhandledPromiseRejectionWarning: Error: getaddrinfo EAI_AGAIN registry.npmjs.org:443
May 6 23:04:50 myco serve[4121]: at Object._errnoException (util.js:1022:11)
May 6 23:04:50 myco serve[4121]: at errnoException (dns.js:55:15)
May 6 23:04:50 myco serve[4121]: at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
May 6 23:04:50 myco serve[4121]: (node:4121) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
May 6 23:04:50 myco serve[4121]: (node:4121) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
It appears that dns.js is part of webpack/node-libs-browser, but that's as far as I've been able to trace it. I can't find GetAddrInfoReqWrap anywhere in my source tree, or getaddrinfo for that matter. Searching around, there's a lot of information from people getting similar errors when deliberately using npm, but that's not what I'm doing. I should already have everything I need on the machine.
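A generic way to trace such a rejection to its origin (a debugging sketch, not specific to serve or webpack) is to register a top-level handler that prints the full stack:

```javascript
// Debugging aid: log every unhandled promise rejection with its full
// stack trace, so the failing lookup (e.g. registry.npmjs.org) can be
// traced back to the code that started it.
process.on('unhandledRejection', (reason) => {
  const detail = reason instanceof Error ? reason.stack : String(reason);
  console.error('Unhandled rejection:', detail);
});
```

Running the entry point with this handler installed (or with node --trace-warnings) usually reveals which module initiated the DNS lookup.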
Apparently this is a problem with serve.
Versions of serve newer than 6.5.3 apparently try to contact the registry; downgrading serve to 6.5.3 solves this particular issue.
This is documented in https://github.com/zeit/serve/issues/348
I'm not sure why serve would need to contact the registry.
noafterglow is right, but just for reference, rolling back to version 6.5.3 of serve can be accomplished with
npm install -g serve@6.5.3
Source: this post
I ran into exactly this error when trying to serve a VueJS app from a different VM than the one where the code was originally developed.
The file vue.config.js read :
module.exports = {
  devServer: {
    host: 'tstvm01',
    port: 3030,
  },
};
When served on the original machine the start up output is :
App running at:
- Local: http://tstvm01:3030/
- Network: http://tstvm01:3030/
Using the same settings on a VM tstvm07 got me a very similar error to the one the OP describes:
INFO  Starting development server...
10% building modules 1/1 modules 0 active
events.js:183
      throw er; // Unhandled 'error' event
      ^

Error: getaddrinfo EAI_AGAIN
    at Object._errnoException (util.js:1022:11)
    at errnoException (dns.js:55:15)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:92:26)
If it ain't already obvious, changing vue.config.js to read ...
module.exports = {
  devServer: {
    host: 'tstvm07',
    port: 3030,
  },
};
... solved the problem.