dbt - Segmentation Fault

When I try to use dbt build I'm getting this segmentation fault and I can't find a reason why.
Strangely, dbt run seemed fine at first (now I get this error even with the run command).
Here's the output with --debug:
select *
from fields
);
07:36:17.280015 [debug] [Thread-3 ]: Using snowflake connection "model.hubspot_source.stg_hubspot__engagement_task"
07:36:17.288681 [debug] [Thread-3 ]: On model.hubspot_source.stg_hubspot__engagement_task: /* {"app": "dbt", "dbt_version": "1.1.1", "profile_name": "<profile>", "target_name": "development", "node_id": "model.hubspot_source.stg_hubspot__engagement_task"} */
select completion_date from analytics_dev.dbt_dawid_stg_hubspot.stg_hubspot__engagement_task_tmp where completion_date is not null limit 1
07:36:17.887152 [debug] [Thread-3 ]: SQL status: SUCCESS 1 in 0.6 seconds
Segmentation fault
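For what it's worth, the failing step can be reproduced in isolation with debug logging, which is how the output above was captured (a minimal sketch; the --select value is the model name taken from the log):
# Rerun just the failing model with debug logging enabled
dbt --debug build --select stg_hubspot__engagement_task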

Related

Error running tez in hive. Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask

Hadoop 3.3.5
Hive 3.1.3
Tez 0.10.2
I followed the instructions at this link to build Tez 0.10.2 for Hadoop 3.3.5: https://tez.apache.org/install.html
The database is stored on an S3 bucket, and I am able to run 'select count(*) from m1.t1' using hive.execution.engine=mr.
When I set hive.execution.engine=tez and run the same query, I get this error immediately:
2023-02-15T21:21:09,208 INFO [a6e2cd1a-b2c9-42d8-9568-8e0b64677f77 main] client.TezClient: App did not succeed. Diagnostics: Application application_1676506240754_0019 failed 2 times due to AM Container for appattempt_1676506240754_0019_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2023-02-15 21:21:08.730]Exception from container-launch.
Container id: container_1676506240754_0019_02_000001
Exit code: 1
[2023-02-15 21:21:08.732]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
[2023-02-15 21:21:08.733]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.tez.dag.app.DAGAppMaster
If I set tez.use.cluster.hadoop-libs to true in tez-site.xml, YARN runs but the job fails with an AWS credential error, even though I have set the fs.s3a credentials in Hadoop's core-site.xml, Hive's hive-site.xml and the .bashrc environment variables.
Keys are masked and shown as a sample only:
echo $AWS_ACCESS_KEY_ID
I9U996400005XXXXXXXX
echo $AWS_SECRET_KEY
mPY8GiU6NegNWoVnaODXXXXXXXXXXXXXXXXXXXX
hive> set hive.execution.engine=tez;
hive> select count(*) from m1.t1;
Query ID = hdp-user_20230215210146_62ed9fab-5d4a-42a9-bf54-5fb6f84a9048
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1676506240754_0015)
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 container INITIALIZING -1 0 0 -1 0 0
Reducer 2 container INITED 1 0 0 1 0 0
----------------------------------------------------------------------------------------------
VERTICES: 00/02 [>>--------------------------] 0% ELAPSED TIME: 2.03 s
----------------------------------------------------------------------------------------------
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1676506240754_0015_3_00, diagnostics=[Vertex vertex_1676506240754_0015_3_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1676506240754_0015_3_00 [Map 1], java.nio.file.AccessDeniedException: s3a://hadoop-cluster/warehouse/tablespace/managed/hive/m1.db/t1: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
I tried adding all fs.s3a properties from core-site.xml to tez-site.xml, and setting fs.s3a.access.key and fs.s3a.secret.key inside the hive session, but I still get the same error:
org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
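For reference, the in-session overrides looked roughly like this (keys masked as above):
hive> set fs.s3a.access.key=I9U996400005XXXXXXXX;
hive> set fs.s3a.secret.key=mPY8GiU6NegNWoVnaODXXXXXXXXXXXXXXXXXXXX;
hive> set hive.execution.engine=tez;
hive> select count(*) from m1.t1;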
Question: according to the Tez install instructions,
"Ensure tez.use.cluster.hadoop-libs is not set in tez-site.xml, or if it is set, the value should be false"
But when it is set to false, Tez could not run at all.
When it is set to true, I get the AWS credential error even though I have set the credentials in every possible location and environment variable.
==========================================================
Update:
Not sure if this is the right answer to this problem but I finally got it working by adding this property to hive-site.xml
<property>
  <name>hive.conf.hidden.list</name>
  <value>javax.jdo.option.ConnectionPassword,hive.server2.keystore.password,fs.s3a.proxy.password,dfs.adls.oauth2.credential,fs.adl.oauth2.credential</value>
</property>
By default, all fs.s3a credential properties are on the hidden-config list even if you don't set this property. I explicitly added the property and removed everything fs.s3a-credential-related from the value.
Now, I can run select count(*) with tez.
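A quick way to confirm the keys are no longer hidden from the session (a sketch; exact behaviour depends on the Hive version) is to echo the property back before running the query:
hive> set fs.s3a.access.key;
hive> set hive.execution.engine=tez;
hive> select count(*) from m1.t1;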

Filebeat does not complete on close_eof + --once

Using filebeat 7.5.2:
I'm using a filebeat configuration with close_eof enabled and I run filebeat with the --once flag. I can see the harvester reaching EOF, but filebeat keeps running.
Filebeat conf:
filebeat.inputs:
- type: log
  close_eof: true
  enabled: true
  paths:
    - "${LOGS_PATH}"
  scan_frequency: 1s
  fields: {
    machine: "${HOST}"
  }
output.logstash:
  hosts: ["192.168.41.6:5044"]
  bulk_max_size: 1024
  timeout: 30s
  pipelining: 1
  workers: 1
And I run it using:
filebeat run --once -v -c "PATH TO CONF..."
And some logs from the filebeat instance:
...
2020-02-04T18:30:16.950Z INFO instance/beat.go:297 Setup Beat: filebeat; Version: 7.5.2
2020-02-04T18:30:17.059Z INFO [publisher] pipeline/module.go:97 Beat name: logstash
2020-02-04T18:30:17.167Z WARN beater/filebeat.go:152 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.168Z INFO instance/beat.go:429 filebeat start running.
2020-02-04T18:30:17.168Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-02-04T18:30:17.168Z INFO registrar/migrate.go:104 No registry home found. Create: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat
2020-02-04T18:30:17.179Z INFO registrar/migrate.go:112 Initialize registry meta file
2020-02-04T18:30:17.192Z INFO registrar/registrar.go:108 No registry file found under: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json. Creating a new registry file.
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:145 Loading registrar data from /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2020-02-04T18:30:17.193Z WARN beater/filebeat.go:368 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.193Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-04T18:30:17.194Z INFO log/input.go:152 Configured paths: [/tmp/tmp.BXJtfiaEzb/*.log]
2020-02-04T18:30:17.206Z INFO input/input.go:114 Starting input of type: log; ID: 13918413832820009056
2020-02-04T18:30:17.225Z INFO input/input.go:167 Stopping Input: 13918413832820009056
2020-02-04T18:30:17.225Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-04T18:30:17.225Z INFO log/harvester.go:251 Harvester started for file: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:384 Running filebeat once. Waiting for completion ...
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:386 All data collection completed. Shutting down.
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:139 Stopping Crawler
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:149 Stopping 1 inputs
2020-02-04T18:30:17.258Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:30:17.296Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... Only metrics here ...
2020-02-04T18:35:55.686Z INFO log/harvester.go:274 End of file reached: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log. Closing because close_eof is enabled.
2020-02-04T18:35:55.686Z INFO crawler/crawler.go:165 Crawler stopped
... MORE METRICS ...
2020-02-04T18:36:26.609Z ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 192.168.41.6:49662->192.168.41.6:5044: i/o timeout
2020-02-04T18:36:26.621Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-04T18:36:28.520Z ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-04T18:36:28.520Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:36:28.521Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... MORE METRICS ...
From this, I'm outputting to Logstash 7.5.2 running in the same Ubuntu 18 VM. Running Logstash with log level trace does not show any error.
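One way to narrow down where it hangs is to rerun filebeat with debug selectors enabled (a sketch; the selector names are standard filebeat ones, and the config path is a placeholder):
# Same invocation, but logging to stderr with debug output for the harvester and the Logstash output
filebeat run --once -v -e -d "harvester,publish,logstash" -c "PATH TO CONF..."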

Objective-C Travis failed for 'Test runner exited before starting test execution'

Here is my code in the .travis.yml file. After running, the build passes, but I found the tests fail. I have no idea about this and it has blocked me for a few days; I'd appreciate your help. Thanks.
--- Error as below ---
2018-12-20 05:57:42.698 xcodebuild[2394:8186] [MT] IDETestOperationsObserverDebug: (CB8F382A-5B77-4BDA-BC4F-6EDF7B7DB822) Beginning test session aaaaTests-CB8F382A-5B77-4BDA-BC4F-6EDF7B7DB822 at 2018-12-20 05:57:42.698 with Xcode 10A255 on target <DVTiPhoneSimulator: 0x7f86387e49d0> {
SimDevice: iPhone 6 (B25EC6DB-0B2F-4920-B6C2-8560331BA779, iOS 9.1, Booted)
} (9.1 (13B143))
2018-12-20 05:57:53.744 xcodebuild[2394:8186] [MT] IDETestOperationsObserverDebug: 11.068 elapsed -- Testing started completed.
2018-12-20 05:57:53.744 xcodebuild[2394:8186] [MT] IDETestOperationsObserverDebug: 0.000 sec, +0.000 sec -- start
2018-12-20 05:57:53.744 xcodebuild[2394:8186] [MT] IDETestOperationsObserverDebug: 11.068 sec, +11.068 sec -- end
2018-12-20 05:57:53.746 xcodebuild[2394:8186] Error Domain=IDETestOperationsObserverErrorDomain Code=6 "Early unexpected exit, operation never finished bootstrapping - no restart will be attempted" UserInfo={NSLocalizedDescription=Early unexpected exit, operation never finished bootstrapping - no restart will be attempted, NSUnderlyingError=0x7f86389df930 {Error Domain=IDETestOperationsObserverErrorDomain Code=5 "Test runner exited before starting test execution." UserInfo={NSLocalizedDescription=Test runner exited before starting test execution., NSLocalizedRecoverySuggestion=If you believe this error represents a bug, please attach the result bundle at /Users/travis/Library/Developer/Xcode/DerivedData/aaaa-bdscsxnorhzvygafnsqdiqoriugx/Logs/Test/Run-aaaa-2018.12.20_05-57-22-+0000.xcresult}}}
Testing failed:
aaaa.app (2488) encountered an error (Early unexpected exit, operation never finished bootstrapping - no restart will be attempted. Underlying error: Test runner exited before starting test execution.)
** TEST FAILED **
The command "set -o pipefail && xcodebuild -workspace aaaa.xcworkspace -scheme aaaa -destination platform\=iOS\ Simulator,OS\=9.1,name\=iPhone\ 6 build test | xcpretty" exited with 65.
Done. Your build exited with 1.
---My code as below---
language: objective-c
osx_image: xcode10
xcode_workspace: aaaa.xcworkspace
xcode_scheme: aaaa
xcode_destination: platform=iOS Simulator,OS=9.1,name=iPhone 6
before_install:
- pod repo update
- npm install ios-sim -g
- ios-sim start --devicetypeid "com.apple.CoreSimulator.SimDeviceType.iPhone-6, 9.1"
The issue can be fixed with the step below:
Change the version of osx_image in the .travis.yml file and retry.
# if the xcode10.1 build fails, you can change to another version, like xcode9.4
osx_image: xcode10.1
Travis-supported Xcode version list:
https://docs.travis-ci.com/user/languages/objective-c/

Submitting Pig job from Google Cloud Dataproc does not add custom jars to Pig classpath

I'm trying to submit a Pig job via Google Cloud Dataproc and include a custom jar that implements a custom load function I use in the Pig script, but I can't find out how to do that.
Adding my custom jar through the UI apparently does NOT add it to the Pig classpath.
Here's the output of the Pig job, showing it fails to find my class:
17/03/29 16:12:21 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
17/03/29 16:12:21 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
17/03/29 16:12:21 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2017-03-29 16:12:21,961 [main] INFO org.apache.pig.Main - Apache Pig version 0.16.0 (r: unknown) compiled Nov 27 2016, 23:14:51
2017-03-29 16:12:21,961 [main] INFO org.apache.pig.Main - Logging error messages to: /tmp/cb3b0696-3f30-4db4-a6a7-bb716d2a8a89/pig_1490803941959.log
2017-03-29 16:12:22,379 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2017-03-29 16:12:22,379 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2017-03-29 16:12:22,379 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://aspen-dp-central-m
2017-03-29 16:12:22,404 [main] INFO com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase - GHFS version: 1.6.0-hadoop2
2017-03-29 16:12:22,890 [main] INFO org.apache.pig.PigServer - Pig Script ID for the session: PIG-default-e53a2851-efe5-4e74-bf33-89dfe0733386
2017-03-29 16:12:22,890 [main] WARN org.apache.pig.PigServer - ATS is disabled since yarn.timeline-service.enabled set to false
2017-03-29 16:12:23,247 [main] ERROR org.apache.pig.PigServer - exception during parsing: Error during parsing. Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Failed to parse: Pig script failed to parse:
<line 8, column 13> pig script failed to validate: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:199)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1819)
at org.apache.pig.PigServer$Graph.access$000(PigServer.java:1527)
at org.apache.pig.PigServer.parseAndBuild(PigServer.java:460)
at org.apache.pig.PigServer.executeBatch(PigServer.java:485)
at org.apache.pig.PigServer.executeBatch(PigServer.java:471)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:172)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:742)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:231)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:206)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
at org.apache.pig.Main.run(Main.java:532)
at org.apache.pig.Main.main(Main.java:176)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by:
<line 8, column 13> pig script failed to validate: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:1339)
at org.apache.pig.parser.LogicalPlanBuilder.buildFuncSpec(LogicalPlanBuilder.java:1324)
at org.apache.pig.parser.LogicalPlanGenerator.func_clause(LogicalPlanGenerator.java:5184)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3515)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1625)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:191)
... 19 more
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
at org.apache.pig.impl.PigContext.resolveClassName(PigContext.java:671)
at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:1336)
... 27 more
2017-03-29 16:12:23,251 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve com.turner.pig.load.HBaseMultiScanLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /tmp/cb3b0696-3f30-4db4-a6a7-bb716d2a8a89/pig_1490803941959.log
2017-03-29 16:12:23,269 [main] INFO org.apache.pig.Main - Pig script completed in 1 second and 477 milliseconds (1477 ms)
Job output is complete
Registering the custom jar inside the Pig script solves the problem.
So, basically:
Added my jar file to Google Cloud Storage (see the gsutil sketch after the script below)
Registered the jar inside the script
Submitted the Pig job either via the UI or the command line below:
gcloud dataproc jobs submit pig --cluster eduboom-central --file custom.pig \
  --jars=gs://eduboom-dataproc/custom/eduboom.jar
custom.pig:
register eduboom.jar;
raw = LOAD 'hbase://eduboom_table'
USING com.eduboom.pig.load.HBaseMultiScanLoader('2017-03-30T14:00Z_00', '2017-03-30T14:01Z_25', 'cf:*')
AS (key:chararray, data);
DUMP raw;
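For completeness, uploading the jar to the bucket referenced by --jars can be done roughly like this (bucket and path are the ones used above):
# Copy the custom loader jar to Cloud Storage before submitting the job
gsutil cp eduboom.jar gs://eduboom-dataproc/custom/eduboom.jar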

glassfish 4.1.1 cluster ssh node error

I'm trying to create a cluster with two nodes using GlassFish 4.1.1 build 1.
One node is local and the other one is an SSH node. The node is working; if I ping it, it responds OK. (Successfully made SSH connection to node node2 (gfNode2))
I have set up SSH, created the node, and created one instance (i2) on that node, but when I want to start the instance I get:
i2: Could not start instance i2 on node node2 (gfNode2). Command failed on node node2 (gfNode2): Previous synchronization failed at Sep 10, 2016 12:25:27 PM Will perform full synchronization. Removing all cached state for instance i2. CLI802 Synchronization failed for directory config, caused by: remote failure: SynchronizeFiles: Exception reading request Command start-local-instance failed. To complete this operation run the following command locally on host gfNode2 from the GlassFish install location /opt/glassfish4: lib/nadmin start-local-instance --node node2 --sync normal i2
If I run this command on the node 2 machine I get:
./nadmin start-local-instance --node node2 --sync normal i2
Previous synchronization failed at Sep 10, 2016 12:25:27 PM
Will perform full synchronization.
Removing all cached state for instance i2.
Enter admin user name> admin
Enter admin password for user "admin">
CLI802 Synchronization failed for directory config, caused by:
remote failure: SynchronizeFiles: Exception reading request
Command start-local-instance failed.
Any idea what to try next?
Update:
The DAS is reachable and SSH is working properly (ping-node-ssh works from the DAS).
What I have noticed is that even after I installed the node (install-node-ssh) and created it (create-node-ssh), node 2 has no files inside.
At /glassfish4/glassfish/nodes/node2/i2 there is only one file, .syncstate, which is empty. The node2/i2 directories are there, but there is nothing in i2, maybe due to: Removing all cached state for instance i2.
This is what I got in the DAS logs:
[2016-09-10T19:31:14.806+0000] [glassfish 4.1] [WARNING] [] [javax.enterprise.system.core] [tid: _ThreadID=106 _ThreadName=admin- listener(5)] [timeMillis: 1473535874806] [levelValue: 900] [[
Could not start instance i2 on node node2 (gfNode2).: Command ' /opt/glassfish4/glassfish/lib/nadmin --_auxinput - --interactive=false start-local-instance --node node2 --sync normal i2' failed on node node2 (gfNode2): Previous synchronization failed at Sep 10, 2016 12:25:27 PM
Will perform full synchronization.
Removing all cached state for instance i2.
Command start-local-instance failed.
CLI802 Synchronization failed for directory config, caused by:
remote failure: SynchronizeFiles: Exception reading request: To complete this operation run the following command locally on host gfNode2 from the GlassFish install location /opt/glassfish4:
lib/nadmin start-local-instance --node node2 --sync normal i2]]
[2016-09-10T19:31:14.818+0000] [glassfish 4.1] [SEVERE] [] [org.glassfish.admingui] [tid: _ThreadID=102 _ThreadName=admin-listener(1)] [timeMillis: 1473535874818] [levelValue: 1000] [[
RestResponse.getResponse() gives FAILURE. endpoint = 'https://localhost:4848/management/domain/servers/server/i2/start-instance'; attrs = '{}']]
[2016-09-10T19:31:14.820+0000] [glassfish 4.1] [SEVERE] [] [org.glassfish.admingui] [tid: _ThreadID=102 _ThreadName=admin-listener(1)] [timeMillis: 1473535874820] [levelValue: 1000] [[
Error in instanceAction ;
endpoint=https://localhost:4848/management/domain/servers/server/i2/start-instance;attrsMap=null]]
If I try to run the command from node2 I get what is shown in the first code block of the post...
The problem here is that the remote instance i2 can't communicate with the DAS to download its configuration.
You will need to verify:
Is the DAS online?
Is the server where the DAS is running reachable by the remote node?
Is SSH communication working properly? (use the asadmin command ping-node-ssh, as in the sketch below)
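A minimal sketch of those checks from the DAS host, assuming the node is named node2 as above (adjust names to your setup):
# Run on the DAS host
asadmin list-nodes            # node2 should be listed as an SSH node
asadmin ping-node-ssh node2   # verifies SSH connectivity from the DAS to the node
asadmin list-instances --long # shows the current state of instance i2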
If you open the server.log file for the instance and the one on the DAS, they should give you a more detailed error message and indicate whether or not the request is reaching the DAS.
The instance logs are located in:
$GLASSFISH_HOME/glassfish/nodes/node2/i2/logs/server.log
The domain logs are located in:
$GLASSFISH_HOME/glassfish/domains/domain1/logs/server.log
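For example, to watch both logs while retrying the start (paths as above; $GLASSFISH_HOME is /opt/glassfish4 in this setup):
# On the remote node
tail -f $GLASSFISH_HOME/glassfish/nodes/node2/i2/logs/server.log
# On the DAS host
tail -f $GLASSFISH_HOME/glassfish/domains/domain1/logs/server.log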