Provisioning a DSE cluster with the Lifecycle Manager fails consistently. The master node (also the one OpsCenter is running on) installed correctly, but each of the other nodes fails the install (and configure) task. I have double-checked the SSH credentials and ports. Any ideas on how to investigate further and fix the issue would be great.
Please excuse the length - trying to provide all of the relevant info.
Ubuntu 14.04.4
JRE 1.8.0_91
DSE 5.0.0
job events:
...
"results": [
{
"event-subtype": "start",
"event-type": "milestone",
"message": "job started...",
...
},
{
"event-subtype": "invocation",
"event-type": "shell-command",
"message": "Invoked command: if [ -x $(which yum) ] && [ -f /etc/redhat-release -o -f /etc/SuSE-release ]; then echo -n yum; elif [ -x $(which apt-get) ]; then echo -n apt; fi"
...
},
{
"event-subtype": "uploaded-facts",
"event-type": "milestone",
"message": "Uploaded facts to OpsCenter server",
...
},
{
"event-subtype": "meld-error",
"event-type": "error",
"message": "Unexpected error executing meld",
...
},
{
"event-subtype": "MeldError",
"event-type": "error",
"message": "Meld failed on: name=\"NODE-2\" ssh-management-address=\"<IP>\" node-id=\"<node-id>\" job-id=\"<job-id>\" stdout=\"\r\n\" stderr=\"\"",
...
}
]
opscenterd.log
2016-07-02 16:34:16,848 [opscenterd] INFO: Install job started for node name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" (async-thread-macro-53)
2016-07-02 16:34:16,850 [opscenterd] INFO: using ssh-private-key (async-thread-macro-53)
2016-07-02 16:34:18,478 [opscenterd] INFO: Received milestone from node name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" message="Uploaded facts to OpsCenter server" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" (MainThread)
2016-07-02 16:34:18,675 [opscenterd] ERROR: Received error from node event-subtype="meld-error" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" name="NODE-2" traceback="Traceback (most recent call last):
  File \"meld.py\", line 3313, in run
    rc = engine.go()
  File \"meld.py\", line 2991, in go
    self.file_manager.get_config_files()
  File \"meld.py\", line 1280, in get_config_files
    {\"accept\": \"application/json\"})
  File \"meld.py\", line 598, in get
    return json.loads(response.read())
  File \"/usr/lib/python2.7/socket.py\", line 351, in read
    data = self._sock.recv(rbufsize)
  File \"/usr/lib/python2.7/httplib.py\", line 549, in read
    return self._read_chunked(amt)
  File \"/usr/lib/python2.7/httplib.py\", line 609, in _read_chunked
    value.append(self._safe_read(amt))
  File \"/usr/lib/python2.7/httplib.py\", line 666, in _safe_read
    raise IncompleteRead(''.join(s), amt)
IncompleteRead: IncompleteRead(4153 bytes read, 4039 more expected)" ssh-management-address="<IP>" node-id="<node-id>" event-type="error" message="Unexpected error executing meld" (MainThread)
2016-07-02 16:34:18,892 [opscenterd] ERROR: Install job a630c081-6ac1-4b00-ac08-18fef320e0d5 failed! (async-thread-macro-54)
2016-07-02 16:34:19,105 [opscenterd] ERROR: Meld failed on: name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" stdout="
" stderr="" (async-thread-macro-53)
Thank you
EDIT: I captured the HTTP traffic between NODE-2 and the master. The error occurs while transferring config files: one of them is not transferred completely for some reason. The JSON looks reasonable until some gibberish appears.
{"filename": "dse.yaml", "contents": {"internode_messaging_options": {"client_worker_threads": 16, "port": 8609, "server_worker_threads": 16, "server_acceptor_thread
Yvatv+~UK{.kMI4^QOrqQTDX_3"DPm,v!"H&M$!1M7
LRYCs{l>-df;cj
W6C9dq
The config files are valid and do work on the master node. Only the transfer to the other nodes fails.
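For anyone who wants to reproduce the read failure outside of meld, here is a rough Python 2 sketch (the host, port, and path below are placeholders from my capture, not a documented OpsCenter endpoint):
import httplib

# Placeholders: substitute the OpsCenter host/port and the config-file URL
# observed in the captured traffic.
conn = httplib.HTTPConnection("<opscenter-host>", 8888)
conn.request("GET", "<config-files-path>", headers={"accept": "application/json"})
resp = conn.getresponse()
try:
    body = resp.read()
    print "read %d bytes cleanly" % len(body)
except httplib.IncompleteRead as e:
    # e.partial holds the bytes that arrived before the stream was cut off
    print "truncated: %d bytes read, %d more expected" % (len(e.partial), e.expected)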
OpsCenter LCM developer here. Your issue is caused by OPSC-8851 in the LCM known issues list: http://docs.datastax.com/en/opscenter/6.0/opsc/release_notes/opscReleaseNotes600.html
This is only triggered under certain network conditions and was discovered too close to release to get fixed in 6.0.0. It's a high priority, though, and will be fixed in a subsequent release soon. Unfortunately, I don't think there's anything you can do to work around this in the field. If you're a DataStax customer, you could contact support and potentially get a patch now to work around the issue... otherwise the only thing I can suggest is to watch the upcoming release notes.
Edit: I should also note that in our tests the issue is intermittent. LCM is designed so you can rerun failed jobs safely (aka it's idempotent) so in all but the most extreme cases you can also work around this just by rerunning your job.
You can specify the private IP for the Listen Address and 0.0.0.0 for the Broadcast Address, and LCM should be able to provision appropriately.
Related
Running the following task in my playbook:
- name: "Check if hold file exists..."
amazon.aws.aws_s3:
region: <region>
bucket: <bucket>
prefix: "hold_jobs_{{ env }}"
mode: list
register: s3_files
I got the following error:
{
    "exception": "Traceback (most recent call last):\n File \"/tmp/ansible_amazon.aws.aws_s3_payload_b92r42e3/ansible_amazon.aws.aws_s3_payload.zip/ansible_collections/amazon/aws/plugins/modules/aws_s3.py\", line 427, in bucket_check\n File \"/var/lib/awx/venv/ansible/lib/python3.6/site-packages/botocore/client.py\", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/var/lib/awx/venv/ansible/lib/python3.6/site-packages/botocore/client.py\", line 661, in _make_api_call\n raise error_class(parsed_response, operation_name)\nbotocore.exceptions.ClientError: An error occurred (400) when calling the HeadBucket operation: Bad Request\n",
    "boto3_version": "1.9.223",
    "botocore_version": "1.12.253",
    "error": {
        "code": "400",
        "message": "Bad Request"
    }
}
I am using this configuration:
ansible [core 2.11.7]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/var/lib/awx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
ansible collection location = /var/lib/awx/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
jinja version = 2.10.1
libyaml = True
I need help solving this; thanks in advance. For the moment I am using a palliative workaround with the shell module, sketched below.
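For reference, this is roughly what that workaround looks like (a sketch, not the exact task; it assumes the AWS CLI is installed on the host and relies on aws s3 ls exiting 1 when nothing matches):
- name: "Check if hold file exists (shell workaround)"
  shell: "aws s3 ls s3://<bucket>/hold_jobs_{{ env }} --region <region>"
  register: s3_files
  changed_when: false
  failed_when: s3_files.rc not in [0, 1]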
I am trying to get CloudWatch running properly on my Lightsail instance, which I appear to have achieved with only partial success.
I have run the wizard using sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard, which produced a config file outlining numerous metrics including cpu, memory and disk usage, as outlined here. The service loads and starts the config file, and doesn't complain about invalid JSON (this did happen a few times, but I fixed it).
I can stop the service with sudo amazon-cloudwatch-agent-ctl -a stop
I then reload the config with sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -s -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
Verify the service is running: sudo amazon-cloudwatch-agent-ctl -a status
Which outputs this:
{
    "status": "running",
    "starttime": "2022-01-10T21:53:12+00:00",
    "configstatus": "configured",
    "cwoc_status": "stopped",
    "cwoc_starttime": "",
    "cwoc_configstatus": "not configured",
    "version": "1.247349.0b251399"
}
Logging into my CloudWatch console, I can see the data being received, and the single line appearing on the graph there corresponds to the times that I started and stopped the service-- so it's definitely doing something. And yet... the only metric that appears on that graph is mem_used_percent... why? Why only this one metric? Where is the rest of my data pertaining to cpu, etc? What am I doing wrong?
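(Side note: as far as I know the agent publishes under the CWAgent namespace by default, so what actually arrives can be listed from the CLI with:)
aws cloudwatch list-metrics --namespace CWAgent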
Here is my config.json, which as I said, is being loaded by the service without issue.
{
    "agent": {
        "metrics_collection_interval": 60,
        "run_as_user": "root"
    },
    "metrics": {
        "append_dimensions": {
            "ImageID": "${aws:ImageId}",
            "InstanceId": "${aws:InstanceId}",
            "InstanceType": "${aws:InstanceType}"
        },
        "metrics_collected": {
            "cpu": {
                "resources": [
                    "*"
                ],
                "measurement": [
                    "cpu_usage_active"
                ],
                "metrics_collection_interval": 60,
                "totalcpu": false
            },
            "disk": {
                "measurement": [
                    "free",
                    "total",
                    "used",
                    "used_percent"
                ],
                "metrics_collection_interval": 60,
                "resources": [
                    "*"
                ]
            },
            "mem": {
                "measurement": [
                    "mem_active",
                    "mem_available",
                    "mem_available_percent",
                    "mem_free",
                    "mem_total",
                    "mem_used",
                    "mem_used_percent"
                ],
                "metrics_collection_interval": 60
            },
            "netstat": {
                "measurement": [
                    "tcp_established",
                    "udp_socket"
                ]
            }
        }
    }
}
Any help greatly appreciated here. TIA.
You likely haven't fetched the configuration yet.
Check the logfile, i.e. /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log, to see which inputs are loaded:
2022-05-18T10:18:57Z I! Loaded inputs: mem disk
To fetch the configuration, do as follows (you'll need to adapt this to your environment - this is for systemd, on-premise, without SSM):
sudo amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
sudo systemctl restart amazon-cloudwatch-agent.service
After:
2022-05-18T11:45:05Z I! Loaded inputs: mem net netstat swap cpu disk diskio
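If you'd rather not scan the whole log, a quick check like this (path assumes the default install location) pulls the most recent inputs line:
grep 'Loaded inputs' /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log | tail -n 1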
Maybe you are facing the same issue as I did. In my case, two configuration JSON files,
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
were merged.
The files are then translated to
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml.
When I checked that file, only the mem definition from /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json had been taken. Thus, I deleted that file and restarted the service.
sudo systemctl restart amazon-cloudwatch-agent
After the restart, the toml file contained what I expected and the metrics were in place.
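For reference, the steps described above as shell commands (paths are the defaults; back up the file first if you are unsure it is safe to delete):
ls /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/
sudo rm /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json
sudo systemctl restart amazon-cloudwatch-agent
cat /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml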
I am following a tutorial in order to learn how to work with the Serverless Framework. The goal is to deploy a Django application. The tutorial suggests putting the necessary environment variables in a separate YAML file. Unfortunately, following the tutorial gets me a KeyError.
I have a serverless.yml, a variables.yml and a handler.py. I will insert all the code below, together with the resulting error.
serverless.yml:
service: serverless-django
custom: ${file(./variables.yml)}
provider:
  name: aws
  runtime: python3.8
functions:
  hello:
    environment:
      - THE_ANSWER: ${self:custom.THE_ANSWER}
    handler: handler.hello
variables.yml:
THE_ANSWER: 42
handler.py:
import os

def hello(event, context):
    return {
        "statusCode": 200,
        "body": "The answer is: " + os.environ["THE_ANSWER"]
    }
The error in my terminal:
{
    "errorMessage": "'THE_ANSWER'",
    "errorType": "KeyError",
    "stackTrace": [
        " File \"/var/task/handler.py\", line 7, in hello\n \"body\": \"The answer is: \" + os.environ[\"THE_ANSWER\"]\n",
        " File \"/var/lang/lib/python3.8/os.py\", line 675, in __getitem__\n raise KeyError(key) from None\n"
    ]
}
Error --------------------------------------------------
Error: Invoked function failed
at AwsInvoke.log (/snapshot/serverless/lib/plugins/aws/invoke/index.js:105:31)
at AwsInvoke.tryCatcher (/snapshot/serverless/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/snapshot/serverless/node_modules/bluebird/js/release/promise.js:547:31)
at Promise._settlePromise (/snapshot/serverless/node_modules/bluebird/js/release/promise.js:604:18)
at Promise._settlePromise0 (/snapshot/serverless/node_modules/bluebird/js/release/promise.js:649:10)
at Promise._settlePromises (/snapshot/serverless/node_modules/bluebird/js/release/promise.js:729:18)
at _drainQueueStep (/snapshot/serverless/node_modules/bluebird/js/release/async.js:93:12)
at _drainQueue (/snapshot/serverless/node_modules/bluebird/js/release/async.js:86:9)
at Async._drainQueues (/snapshot/serverless/node_modules/bluebird/js/release/async.js:102:5)
at Immediate._onImmediate (/snapshot/serverless/node_modules/bluebird/js/release/async.js:15:14)
at processImmediate (internal/timers.js:456:21)
at process.topLevelDomainCallback (domain.js:137:15)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: linux
Node Version: 12.18.1
Framework Version: 2.0.0 (standalone)
Plugin Version: 4.0.2
SDK Version: 2.3.1
Components Version: 3.1.2
The command I'm trying is sls invoke -f hello. The command sls deploy has already been executed successfully.
I am new to serverless, so please let me know how to fix this, or if any more information is needed.
First of all, there is an error in the YAML script:
Serverless Error ---------------------------------------
Invalid characters in environment variable 0
The error comes from defining the environment variables as a YAML array instead of key-value pairs; remove the dash before THE_ANSWER.
Then, after deployment (sls deploy -v), everything works smoothly:
sls invoke -f hello
{
"statusCode": 200,
"body": "The answer is: 42"
}
serverless.yml
service: sls-example
custom: ${file(./variables.yml)}
provider:
  name: aws
  runtime: python3.8
functions:
  hello:
    environment:
      THE_ANSWER: ${self:custom.THE_ANSWER}
    handler: handler.hello
variables.yml
THE_ANSWER: 42
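Unrelated to the root cause, but worth noting: a small defensive tweak to handler.py (a sketch using os.environ.get, not part of the tutorial) avoids the hard KeyError if the variable is ever missing again:
import os

def hello(event, context):
    # os.environ.get() returns a fallback instead of raising KeyError
    # when the environment variable is absent
    return {
        "statusCode": 200,
        "body": "The answer is: " + os.environ.get("THE_ANSWER", "unset")
    }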
I have 2 CentOS 7.1 nodes and I'm trying to get Flocker running on them. I've followed the installation steps exactly; however, when it comes to running the following command to test whether flocker-docker-plugin works:
docker run -v apples:/data --volume-driver flocker busybox sh -c "echo hello > /data/file.txt"
I get the error:
Error response from daemon: Error looking up volume plugin flocker: Plugin not found
The flocker-docker-plugin logs show the following:
{"request_body": null, "url": "https://foo.bar.com:4523/v1/state/nodes/by_era/b72bb203-b174-4241-a03a-6171cbc10f30", "timestamp": 1451201332.659948, "action_status": "started", "task_uuid": "1ae63069-286c-44fa-9dd2-6751ca0efe63", "action_type": "flocker:apiclient:http_request", "method": "GET", "task_level": [1]}
{"task_uuid": "1ae63069-286c-44fa-9dd2-6751ca0efe63", "error": false, "timestamp": 1451201332.749499, "message": "Starting factory <twisted.web.client._HTTP11ClientFactory instance at 0x2fc7710>", "message_type": "twisted:log", "task_level": [3]}
{"exception": "twisted.web._newclient.ResponseNeverReceived", "task_level": [4], "action_type": "flocker:apiclient:http_request", "reason": "[<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]", "timestamp": 1451201333.050012, "task_uuid": "1ae63069-286c-44fa-9dd2-6751ca0efe63", "action_status": "failed"}
{"task_uuid": "c8d28668-f21b-4863-bf20-6c30f54c3d25", "error": true, "timestamp": 1451201333.05045, "message": "Unhandled Error\nTraceback (most recent call last):\nFailure: twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]\n", "message_type": "twisted:log", "task_level": [1]}
{"task_uuid": "36f1ddd5-c5fa-4438-85d7-131e7752f8d3", "error": true, "timestamp": 1451201333.050727, "message": "main function encountered error\nTraceback (most recent call last):\nFailure: twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]\n", "message_type": "twisted:log", "task_level": [1]}
{"task_uuid": "08bd8f13-0a8e-43f4-8b80-b4cf5b317f00", "error": false, "timestamp": 1451201333.051034, "message": "Stopping factory <twisted.web.client._HTTP11ClientFactory instance at 0x2fc7710>", "message_type": "twisted:log", "task_level": [1]}
{"task_uuid": "8f0fd1ef-19ca-4033-b16f-6d42e33eda1a", "error": false, "timestamp": 1451201333.052711, "message": "Main loop terminated.", "message_type": "twisted:log", "task_level": [1]}
flocker-docker-plugin.service: main process exited, code=exited, status=1/FAILURE
Unit flocker-docker-plugin.service entered failed state.
flocker-docker-plugin.service failed.
flocker-docker-plugin.service holdoff time over, scheduling restart.
Started Flocker Docker Plugin.
Starting Flocker Docker Plugin...
Also, running uft-flocker-volumes --control-service=foo.bar.com list-nodes returns:
jonathan@ubuntu:~/Flocker/sc-test-cluster$ uft-flocker-volumes --control-service=foo.bar.com list-nodes
Unhandled Error
Traceback (most recent call last):
Failure: twisted.web._newclient.ResponseNeverReceived
[<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]
Update:
I tried downgrading to Docker 1.8.2 and reran the command; it didn't work, same error.
Output of ls /etc/flocker:
[root@sc-test2 jonathan]# ls /etc/flocker
agent.yml cluster.crt node.crt node.key plugin.crt plugin.key
[root@sc-test1 jonathan]# ls /etc/flocker
agent.yml control-service.crt node.crt plugin.crt
cluster.crt control-service.key node.key plugin.key
Update 1/1/2016:
I set the following environment variables as per the kubernetes docs http://kubernetes.io/v1.1/examples/flocker/.
export FLOCKER_CONTROL_SERVICE_HOST=foo.bar.com
export FLOCKER_CONTROL_SERVICE_CA_FILE=/etc/flocker/cluster.crt
export FLOCKER_CONTROL_SERVICE_CLIENT_CERT_FILE=/etc/flocker/node.crt
export FLOCKER_CONTROL_SERVICE_CLIENT_KEY_FILE=/etc/flocker/node.key
export FLOCKER_CONTROL_SERVICE_PORT=4523
And I got a different error when running the command:
jonathan@ubuntu:~/Flocker/sc-test-cluster$ uft-flocker-volumes --control-service=sc-test1.cloudapp.net list-nodes
wget: error getting response: Connection reset by peer
===========================================================================
Unable to establish network connectivity from inside a container.
If you see an error message above, that may give you a clue how to fix it.
If you run docker in a VM, restarting the VM often helps, especially if
you have changed network (and/or DNS servers) since starting the VM.
If you are using docker-machine (e.g. as part of docker toolbox), you can
run the following command (or similar) to do that:
docker-machine restart default && eval $(docker-machine env default)
To ignore this check, and proceed anyway (e.g. if you know you are offline)
set IGNORE_NETWORK_CHECK=1
===========================================================================
So I set the flag to see what happens:
jonathan@ubuntu:~/Flocker/sc-test-cluster$ export IGNORE_NETWORK_CHECK=1
And bam! The same error :(
jonathan@ubuntu:~/Flocker/sc-test-cluster$ uft-flocker-volumes --control-service=sc-test1.cloudapp.net list-nodes
Unhandled Error
Traceback (most recent call last):
Failure: twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure <class 'OpenSSL.SSL.Error'>>]
Is the wget error a helpful clue as to what may be happening?
Error response from daemon: Error looking up volume plugin flocker: Plugin not found
This might be because the control-service hostname in agent.yml is not configured correctly. Make sure it's the host of the control-service node, not the agent node itself.
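For comparison, a minimal sketch of what /etc/flocker/agent.yml should roughly look like (the dataset section is elided since it depends on your storage driver, and the port shown is the default agent port; adjust both as needed):
version: 1
control-service:
  hostname: "foo.bar.com"   # the control-service node, not this agent
  port: 4524                # default agent port; adjust if you changed it
dataset:
  backend: "..."            # depends on your storage backend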
At the time of running my Titanium application I am getting an error. I have tried all the solutions suggested by Google, but the problem still remains. Please help if anyone can understand the error log I pasted below. Thanks.
Appcelerator Command-Line Interface, version 4.1.2
Copyright (c) 2014-2015, Appcelerator, Inc. All Rights Reserved.
TRACE | __command__ search paths:
[
"C:\\Users\\acer\\.appcelerator\\install\\4.1.2\\package",
"C:\\Users\\acer\\.appcelerator\\install\\4.1.2\\package\\node_modules",
"C:\\Users\\acer\\Desktop\\node_modules",
"C:\\Users\\acer\\node_modules",
"C:\\Users\\node_modules",
"C:\\node_modules",
"C:\\Users\\acer\\.appcelerator\\.npm\\lib\\node_modules"
]
DEBUG | [PLUGIN-LOAD] 0ms C:\Users\acer\.appcelerator\install\4.1.2\package\appc.js
DEBUG | [PLUGIN-LOAD] 623ms C:\Users\acer\.appcelerator\install\4.1.2\package\node_modules\appc-cli-titanium\appc.js
DEBUG | [PLUGIN-LOAD] 2ms C:\Users\acer\.appcelerator\install\4.1.2\package\node_modules\arrow\appc.js
log level set to "trace"
executing command "run"
set environment to {"registry":"https://software.appcelerator.com","security":"https://security.appcelerator.com","baseurl":"https://platform.appcelerator.com"}
checking credentials for existing session
Attempting to load session info from config file
check if session is invalidated
session expiry 1439531789135 false
+---------------------------------------------------------------------------------------+
+ This is a Developer trial account. You may use this software for evaluation purposes. +
+ Once you are ready to go to production, upgrade at https://billing.appcelerator.com +
+---------------------------------------------------------------------------------------+
Arrow Cloud config file: C:\Users\acer\.acs
found Arrow Cloud login { mid: 'dfa5b79881cc650462ff93ae5df5f65d5760e1cd',
publishPort: 443,
publishHost: 'https://admin.cloudapp-enterprise.appcelerator.com',
username: 'mymail',
cookie: [ 'connect.sid=s%3Ab9eP%2F3NylwuSkAqEtFC3La1k.RDNuz84uHznd9QjZUQ9Rm6UxHfan0GZcGKug2GtaQjg; Path=/; Expires=Fri, 14 Aug 2015 05:56:35 GMT; HttpOnly' ],
defaultEP:
{ publishHost: 'https://admin.cloudapp-enterprise.appcelerator.com',
publishPort: 443 } } , checking nodeACSEndpoint= https://admin.cloudapp-enterprise.appcelerator.com
Arrow Cloud cookie expiry [ 1439531795000 ]
session already loaded in opts.session
getCredentials() session:
{
"ipaddress": "192.168.1.13",
"username": "mymail",
"password": "<OMITTED>",
"session": "<OMITTED>",
"nonce": "<OMITTED>",
"environment": {
"name": "production",
"isProduction": true,
"acsBaseUrl": "https://api.cloud.appcelerator.com",
"acsAuthBaseUrl": "https://secure-identity.cloud.appcelerator.com",
"nodeACSEndpoint": "https://admin.cloudapp-enterprise.appcelerator.com"
},
"token": "<OMITTED>",
"fingerprint": "dfa5b79881cc650462ff93ae5df5f65d5760e1cd",
"fingerprint_description": "Windows Machine ID: bcd68e35-cc53-4567-ab68-66246b73cc67",
"org_id": 100057656,
"expiry": 1439531789135
}
loading plugins for command "run"
run search paths:
[
"C:\\Users\\acer\\.appcelerator\\install\\4.1.2\\package",
"C:\\Users\\acer\\.appcelerator\\install\\4.1.2\\package\\node_modules",
"C:\\Users\\acer\\Desktop\\node_modules",
"C:\\Users\\acer\\node_modules",
"C:\\Users\\node_modules",
"C:\\node_modules",
"C:\\Users\\acer\\.appcelerator\\.npm\\lib\\node_modules"
]
[PLUGIN-LOAD] 0ms C:\Users\acer\.appcelerator\install\4.1.2\package\appc.js
[PLUGIN-LOAD] 251ms C:\Users\acer\.appcelerator\install\4.1.2\package\node_modules\appc-cli-titanium\appc.js
run plugin: C:\Users\acer\.appcelerator\install\4.1.2\package\node_modules\appc-cli-titanium
[PLUGIN-LOAD] 0ms C:\Users\acer\.appcelerator\install\4.1.2\package\node_modules\arrow\appc.js
run plugin: C:\Users\acer\.appcelerator\install\4.1.2\package\node_modules\arrow
plugin "arrow" failed its "when" function check, skipping...
loading plugin "titanium" for command "run" CLI options via function
loading plugin "titanium" for command "run" CLI options via array
Duplicate option "colors" for command "run", removing...
executing command "run" with the following plugins:
["titanium"]
TRACE | Attempting to load session info from config file
TRACE | check if session is invalidated
TRACE | session expiry 1439531789135 false
TRACE | session already loaded in opts.session
DEBUG | Titanium Downloads Last Checked: 1439447168501
TRACE | "C:\Program Files\nodejs\node.exe" "C:\Users\acer\.appcelerator\install\4.1.2\package\node_modules\appc-cli-titanium\node_modules\titanium\bin\titanium" config -o json-object
TRACE | "C:\Program Files\nodejs\node.exe" "C:\Users\acer\.appcelerator\install\4.1.2\package\node_modules\appc-cli-titanium\node_modules\titanium\bin\titanium" sdk -o json
TRACE | checking for titanium, result:
{ activeSDK: '4.1.0.GA',
defaultInstallLocation: 'C:\\ProgramData\\Titanium',
installLocations:
[ 'C:\\ProgramData\\Titanium',
'C:\\Users\\acer\\AppData\\Roaming\\Titanium',
'C:\\ProgramData\\Application Data\\Titanium' ],
installed: { '4.1.0.GA': 'C:\\ProgramData\\Titanium\\mobilesdk\\win32\\4.1.0.GA' } }
TRACE | C:\Program Files\nodejs\node.exe [ 'C:\\Users\\acer\\.appcelerator\\install\\4.1.2\\package\\node_modules\\appc-cli-titanium\\node_modules\\titanium\\bin\\titanium',
'build',
'--platform',
'android',
'--log-level',
'trace',
'--sdk',
'4.1.0.GA',
'--project-dir',
'D:\\Titanium WS\\ex',
'--target',
'emulator',
'--android-sdk',
'C:\\android-sdk-win',
'--device-id',
'Appp',
'--skip-js-minify',
'--no-colors',
'--no-progress-bars',
'--no-prompt',
'--prompt-type',
'socket-bundle',
'--prompt-port',
'62727',
'--plugin-paths',
'C:\\Users\\acer\\.appcelerator\\install\\4.1.2\\package\\node_modules',
'--config-file',
'C:\\Users\\acer\\AppData\\Local\\Temp\\build-1439448105417.json',
'--no-banner' ]
[WARN] :
TRACE | titanium exited with exit code 1
[ERROR] Application Installer abnormal process termination. Process exit value was 1
After 24 hours I got my project installing on my device. I was unable to pin down the exact problem, but I want to share the steps I tried to solve it:
I installed all the SDKs one by one.
I switched my current workspace and created a new one (the old workspace still gives the error).
Now my project is installing fine, every time.
Thanks.