MobileFirst Analytics dashboard does not show any data - ibm-mobilefirst

Our MFP Analytics dashboard was working fine until last week; now no data is shown in the dashboard, and restarting the server does not seem to help either. The cluster status on the server is RED. What can I do to resolve this?

I've learned that when the cluster status is red, it is likely due to one or more unassigned shards. The following commands are quite handy and solved my issue:
To find the list of all shards, use this CURL command:
curl -XGET http://localhost:9500/_cat/shards
To find the list of unassigned shards, use this CURL command:
curl -XGET http://localhost:9500/_cat/shards | grep UNASSIGNED
Once you get the list of unassigned shards, you can initialize them
with the following command:
for shard in $(curl -XGET http://localhost:9500/_cat/shards | grep UNASSIGNED | awk '{print $2}'); do
  curl -XPOST 'http://localhost:9500/_cluster/reroute' -d '{
    "commands" : [ {
      "allocate" : {
        "index" : "worklight",
        "shard" : '"$shard"',
        "node" : "worklightNode_1234",
        "allow_primary" : true
      }
    } ]
  }'
  sleep 5
done
Replace the 'node' value (worklightNode_1234 in my case) with the name of the node that the
shards should be allocated to; you can find the node names in the output of the
first command above.
Run the following command to check the status:
curl -XGET http://localhost:9500/_cluster/health?pretty
The cluster status should be green once all the shards are initialized and assigned.
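The last check can be scripted; below is a minimal sketch of a wait-until-green loop. The extract_status helper is hypothetical and assumes the standard _cluster/health JSON shape; the canned sample stands in for a real curl call against localhost:9500.

```shell
# Hypothetical helper: pull the "status" field out of _cluster/health JSON
# without needing jq (assumes the usual "status" : "green|yellow|red" field).
extract_status() {
  grep -o '"status" *: *"[a-z]*"' | grep -o 'green\|yellow\|red'
}

# Canned sample response; in practice you would pipe in:
#   curl -s http://localhost:9500/_cluster/health
sample='{ "cluster_name" : "worklight", "status" : "green", "number_of_nodes" : 1 }'
echo "$sample" | extract_status

# A wait loop against a live server could then look like (commented out here):
#   until curl -s http://localhost:9500/_cluster/health | extract_status | grep -q green; do
#     sleep 5
#   done
```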

On MobileFirst Platform 7.1, I solved the problem by changing the configuration in the server.xml file of the Analytics server, reducing the number of shards:
<jndiEntry jndiName="analytics/shards" value="5" />
<jndiEntry jndiName="analytics/replicas_per_shard" value="1" />
For default values refer to: http://www.ibm.com/support/knowledgecenter/SSHSCD_7.1.0/com.ibm.worklight.monitor.doc/monitor/c_op_analytics_properties.html

Related

Ignite cluster size using control script

I need to get the Ignite cluster size (number of server nodes running), preferably using the control script or the Ignite REST API. I am able to get the baseline nodes using the command below, but I don't see any command or REST API that returns the topology snapshot. Is there a way to get this information to an Ignite client rather than looking into the logs?
Workaround to get baseline nodes:
baselineNodes=$(kubectl --kubeconfig config.conf exec <ignite-node> -n {client_name} -- /opt/ignite/apache-ignite/bin/control.sh --baseline | grep "Number of baseline nodes" | cut -d ':' -f2 | sed 's/^ *//g')
It seems that the topology REST command could do the trick. Here is an example call from the documentation:
http://host:port/ignite?cmd=top&attr=true&mtr=true&id=c981d2a1-878b-4c67-96f6-70f93a4cd241
I got help from the Ignite community, and the command below worked for me. The basic idea is to use a metric to extract the number of server nodes.
kubectl --kubeconfig config.conf exec <ignite-node> -n {client_name} -- /opt/ignite/apache-ignite/bin/control.sh --metric cluster.TotalServerNodes | grep -v "metric" | grep cluster.TotalServerNodes | cut -d " " -f5 | sed 's/^ *//g'
Quoting the reply received:
"You can query any metric value or system view content via control script [1], [2]
control.sh --system-view nodes
or [3]
control.sh --metric cluster.TotalBaselineNodes
control.sh --metric cluster.TotalServerNodes
control.sh --metric cluster.TotalClientNodes
[1] https://ignite.apache.org/docs/latest/tools/control-script#metric-command
[2] https://ignite.apache.org/docs/latest/tools/control-script#system-view-command
[3] https://ignite.apache.org/docs/2.11.1/monitoring-metrics/new-metrics#cluster"
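If you go the REST route, the topology response can be post-processed in the shell. Below is a hypothetical sketch of counting server nodes in such a response; the per-node "client" boolean flag is an assumption, so verify the actual response shape for your Ignite version.

```shell
# Hypothetical sketch: count server nodes in an Ignite REST topology
# response (ignite?cmd=top). Assumes each node object carries a boolean
# "client" flag -- check your Ignite version's actual JSON shape.
count_server_nodes() {
  grep -o '"client":false' | wc -l | tr -d ' '
}

# Canned sample standing in for the real curl call to the REST endpoint:
sample='{"successStatus":0,"response":[{"nodeId":"a","client":false},{"nodeId":"b","client":true},{"nodeId":"c","client":false}]}'
echo "$sample" | count_server_nodes
```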

How to see error logs when docker-compose fails

I have a Docker image whose entrypoint.sh script checks whether the project is well configured. If everything is correct, the container starts; otherwise I receive this error:
echo "Danger! bla bla bla"
exit 1000
Now if I start the container in this mode:
docker-compose up
I see the error correctly:
Danger! bla bla bla
but I need to launch the container in daemon mode:
docker-compose up -d
How can I show the log only in case of error?
The -d flag in docker-compose up -d stands for detached mode, not daemon mode.
In detached mode, your services (i.e. containers) run in the background of your terminal, so their output is not streamed to your console.
To see the logs of all services, run this command:
docker-compose logs -f
The -f flag stands for "Follow log output".
This will output the logs of every running service defined in your docker-compose.yml.
From my understanding, you want to fire up your services with:
docker-compose up -d
so they run in the background and the console output stays clean.
To print only the errors from the logs, add a pipe operator and search for them with the grep command:
docker-compose logs | grep error
This will output all the error lines logged by your Docker services.
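Note that grep is case-sensitive by default, and different services spell their failure markers differently. Here is a sketch of a slightly more robust filter; the sample log lines and the pattern list are illustrative, so tune them to your services.

```shell
# Sample of what `docker-compose logs` might emit (hypothetical lines):
logs='web_1  | Starting server...
web_1  | ERROR: config file missing
db_1   | ready to accept connections
web_1  | Danger! bla bla bla'

# Case-insensitive match on common failure markers; against a live stack:
#   docker-compose logs | grep -iE 'error|danger|fatal'
printf '%s\n' "$logs" | grep -iE 'error|danger|fatal'
```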
You'll find the official documentation related to the docker-compose up command here and to the logs command here. More info on logs-handling in this article.
Related answer here.

Where do I find the project ID for the GitLab API?

I use GitLab on their servers. I would like to download my latest built artifacts (build via GitLab CI) via the API like this:
curl --header "PRIVATE-TOKEN: 9koXpg98eAheJpvBs5tK" "https://gitlab.com/api/v3/projects/1/builds/8/artifacts"
Where do I find this project ID? Or is this way of using the API not intended for hosted GitLab projects?
I just found an even easier way to get the project ID: view the HTML source of the GitLab page hosting your project. There is a hidden input field named project_id, e.g.:
<input type="hidden" name="project_id" id="project_id" value="335" />
The latest version of GitLab (11.4 at the time of this writing) now puts the Project ID at the top of the front page of your repository.
On the Edit Project page there is a Project ID field in the top right corner.
(You can also see the ID on the CI/CD pipelines page, in the example code of the Triggers section.)
In older versions, you can see it on the Triggers page, in the URLs of the example code.
You can query for your owned projects:
curl -XGET --header "PRIVATE-TOKEN: XXXX" "https://gitlab.com/api/v4/projects?owned=true"
You will receive JSON with each owned project:
[
{
"id":48,
"description":"",
"default_branch":"master",
"tag_list":[
...
You can also get the project ID from the Triggers configuration in your project, which already has some sample code containing your ID.
From the Triggers page:
curl -X POST \
-F token=TOKEN \
-F ref=REF_NAME \
https://<GitLab Installation>/api/v3/projects/<ProjectID>/trigger/builds
As mentioned here, all the project-scoped APIs accept either an ID or the project path (URL-encoded).
So just use https://gitlab.com/api/v4/projects/gitlab-org%2Fgitlab-foss directly when you want to interact with a project.
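For a path built from letters, digits, '.', '-' and '_', the only character that needs escaping is the slash. Below is a minimal sketch of building such a URL; the urlencode_path helper is hypothetical and only handles '/'.

```shell
# Hypothetical helper: percent-encode the slashes in a project path so it
# can be used in place of the numeric project ID.
urlencode_path() {
  printf '%s' "$1" | sed 's,/,%2F,g'
}

# Build the API URL for a project addressed by its path:
url="https://gitlab.com/api/v4/projects/$(urlencode_path 'gitlab-org/gitlab-foss')"
echo "$url"
```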
Enter the project.
On the Left Hand menu click Settings -> General -> Expand General Settings
It has a label Project ID and is next to the project name.
This is on version GitLab 10.2
Here is an API call that actually solves the problem of getting the project ID for a specific GitLab project:
curl -XGET -H "Content-Type: application/json" --header "PRIVATE-TOKEN: $GITLAB_TOKEN" http://<YOUR-GITLAB-SERVER>/api/v3/projects/<YOUR-NAMESPACE>%2F<YOUR-PROJECT-NAME> | python -mjson.tool
Or maybe you just want the project id:
curl -XGET -H "Content-Type: application/json" --header "PRIVATE-TOKEN: $GITLAB_TOKEN" http://<YOUR-GITLAB-SERVER>/api/v3/projects/<YOUR-NAMESPACE>%2F<YOUR-PROJECT-NAME> | python -c 'import sys, json; print(json.load(sys.stdin)["id"])'
Note that the repo URL (namespace/repo name) is URL-encoded.
If you know your project name, you can get the project id by using the following API:
curl --header "Private-Token: <your_token>" -X GET https://gitlab.com/api/v4/projects?search=<exact_project_name>
This will return a JSON that includes the id:
[
{
"id":<project id>, ...
}
]
Just for the record, if someone else needs to download artifacts from gitlab.com that were created via gitlab-ci:
Create a private token within your browser
Get the project id via curl -XGET --header "PRIVATE-TOKEN: YOUR_AD_HERE?" "https://gitlab.com/api/v3/projects/owned"
Download the latest artifact from your master branch created via a gitlab-ci step called release: curl -XGET --header "PRIVATE-TOKEN: YOUR_AD_HERE?" -o myapp.jar "https://gitlab.com/api/v3/projects/4711/builds/artifacts/master/download?job=release"
I am very impressed by the beauty of GitLab.
You can view it under the repository name
You can query projects with the search attribute, e.g.:
http://gitlab.com/api/v3/projects?private_token=xxx&search=myprojectname
As of Gitlab API v4, the following API returns all projects that you own:
curl --header 'PRIVATE-TOKEN: <your_token>' 'https://gitlab.com/api/v4/projects?owned=true'
The response contains project id. Gitlab access tokens can be created from this page- https://gitlab.com/profile/personal_access_tokens
None of the answers covers the generic case: the closest ones are intended only for gitlab.com, not for self-hosted servers. The following can be used to find the ID of the project streamer on the GitLab server my-server.com, for example:
$ curl --silent --header 'Authorization: Bearer MY-TOKEN-XXXX' \
'https://my-server.com/api/v4/projects?per_page=100&simple=true'| \
jq -rc '.[]|select(.name|ascii_downcase|startswith("streamer"))'| \
jq .id
168
Remark that:
this gives only the first 100 projects; if you have more, you should request the pages that follow (&page=2, 3, ...) or use a different API (e.g. groups/:id/projects).
jq is quite flexible; here we are just filtering out one project, but you can do many things with it.
There appears to be no way to retrieve only the project ID using the GitLab API. Instead, retrieve all of the owner's projects and loop through them until you find the matching one, then return its ID. I wrote a script to get the project ID:
#!/bin/bash
projectName="$1"
namespace="$2"
default=$(sudo cat .namespace)
namespace="${namespace:-$default}"
json=$(curl --header "PRIVATE-TOKEN: $(sudo cat .token)" -X GET \
  'https://gitlab.com/api/v4/projects?owned=true' 2>/dev/null)
id=0
idMatch=0
pathWithNamespaceMatch=0
rowToMatch="\"$(echo "$namespace/$projectName" | tr '[:upper:]' '[:lower:]')\","
for row in $(echo "${json}" | jq -r '.'); do
  [[ $idMatch -eq 1 ]] && { idMatch=0; id=${row::-1}; }
  [[ $pathWithNamespaceMatch -eq 1 ]] && { pathWithNamespaceMatch=0; [[ "$row" == "$rowToMatch" ]] && { echo "$id"; exit 0; } }
  [[ ${row} == "\"path_with_namespace\":" ]] && pathWithNamespaceMatch=1
  [[ ${row} == "\"id\":" ]] && idMatch=1
done
echo 'Error! Could not retrieve projectID.'
exit 1
It expects the default namespace to be stored in a file named .namespace and the private token in a file named .token. For increased security, it's best to run chmod 000 .token; chmod 000 .namespace; chown root .namespace; chown root .token
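If jq is available, the same lookup collapses into a single filter. Below is a sketch; the project_id_by_path helper is hypothetical, and the canned JSON stands in for the real /api/v4/projects?owned=true response.

```shell
# Hypothetical helper: look up a project's id by its path_with_namespace
# in a projects JSON array (as returned by /api/v4/projects?owned=true).
project_id_by_path() {
  # $1 = JSON array, $2 = namespace/project
  printf '%s' "$1" | jq --arg p "$2" '.[] | select(.path_with_namespace == $p) | .id'
}

# Canned sample standing in for the real API response:
sample='[{"id":48,"path_with_namespace":"me/app"},{"id":49,"path_with_namespace":"me/lib"}]'
project_id_by_path "$sample" "me/lib"
```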
If your project name is unique, it is handy to follow the answer by shunya, search by name, refer API doc.
If you have a stronger access token and the GitLab instance contains a few projects with the same name in different groups, then searching within the group is more convenient. API doc here, e.g.:
curl --header "PRIVATE-TOKEN: <token>" -X GET "https://gitlab.com/api/v4/groups/<group_id>/search?scope=projects&search=<project_name>"
The group ID can be found on the Settings page under the group domain.
To fetch just the project ID from the output, you can do:
curl --header "PRIVATE-TOKEN: <token>" -X GET "https://gitlab.com/api/v4/groups/<group_id>/search?scope=projects&search=<project_name>" | jq '.[0].id'
To get the ID of every project, use:
curl --header 'PRIVATE-TOKEN: XXXXXXXXXXXXXXXXXXXXXXX' 'https://gitlab.com/api/v4/projects?owned=true' > curloutput
grep -oPz 'name\":\".*?\"|{\"id\":[0-9]+' curloutput | sed 's/{\"/\n/g' | sed 's/name//g' | sed 's/id\"://g' | sed 's/\"//g' | sort -u -n
Not specific to the question, but I somehow ended up here; it might help others.
I used Chrome to get a project ID:
Go to the desired project, e.g. gitlab.com/username/project1
Open the Network tab in the inspector
See the first graphql request in the Network tab
You can search for the project path
curl -s 'https://gitlab.com/api/v4/projects?search=my/path/to/my/project&search_namespaces=true' --header "PRIVATE-TOKEN: $GITLAB_TOKEN" | python -mjson.tool | grep \"id\"
https://docs.gitlab.com/ee/api/projects.html
This will match only your project and will not return other, unrelated projects.
My favorite method is to take it from the CI/CD pipeline, where the project ID is assigned dynamically at build time.
Simply use the predefined CI_PROJECT_ID variable in your code.
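As a sketch, a CI job script could use the predefined CI_PROJECT_ID variable to build API URLs without hard-coding the ID. The fallback value 12345 below is a fake, only there so the snippet runs outside a real CI job where GitLab would set the variable.

```shell
# In a real GitLab CI job, CI_PROJECT_ID is set automatically; the
# fallback here is only so the snippet runs outside CI.
CI_PROJECT_ID="${CI_PROJECT_ID:-12345}"
api_url="https://gitlab.com/api/v4/projects/$CI_PROJECT_ID"
echo "$api_url"

# A job would then call the API against the current project, e.g.:
#   curl --header "JOB-TOKEN: $CI_JOB_TOKEN" "$api_url"
```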

How to disable elasticsearch 5.0 authentication?

I'm just starting to use Elasticsearch. I created an index with default settings (5 shards, 1 replica) and then indexed ~13 GB of text files with the attachment plugin. As a result, searching in Kibana Discover became very slow. However, searching in the console is fast:
GET /mytext/_search
{
"fields": [ "file.name" ],
"query": {
"match": {
"file.content": "foobar"
}
},
"highlight": {
"fields": {
"file.content": {
}
}
}
}
To investigate why it's so slow, I installed X-Pack. The guide documentation doesn't seem comprehensive; I didn't get through the security config.
A default install of Elasticsearch does not require a login, but it does after the X-Pack plugin is installed. I'm confused by the security settings of Elasticsearch, Kibana, and X-Pack; do they share the same user accounts? After all, I got authentication working with:
curl -XPUT -uelastic:changeme 'localhost:9200/_shield/user/elastic/_password' -d '{ "password" : "newpass1" }'
curl -XPUT -uelastic:newpass1 'localhost:9200/_shield/user/kibana/_password' -d '{ "password" : "newpass2" }'
Here comes the problem: I can't log in using the Java client with org.elasticsearch.plugin:shield. It's likely that the latest version of the shield dependency (2.3.3) mismatches the elasticsearch dependency (5.0.0-alpha).
Well, can I just disable the authentication?
From the node config:
GET http://localhost:9200/_nodes
"nodes" : {
"v_XmZh7jQCiIMYCG2AFhJg" : {
"transport_address" : "127.0.0.1:9300",
"version" : "5.0.0-alpha2",
"roles" : [ "master", "data", "ingest" ],
...
"settings" : {
"node" : {
"name" : "Apache Kid"
},
"http" : {
"type" : "security"
},
"transport" : {
"type" : "security",
"service" : {
"type" : "security"
}
},
...
So, can I modify these settings, and what are the possible values?
In a test environment, I added the following option to elasticsearch.yml and/or kibana.yml:
xpack.security.enabled: false
If you run Docker, you can use this (assuming your container name is elasticsearch; you can use the ID if you don't like the name).
Go to a bash shell in the container:
docker exec -i -t elasticsearch /bin/bash
Then remove X-Pack:
elasticsearch-plugin remove x-pack
Exit the container:
exit
And restart the container:
docker restart elasticsearch
Disclaimer: solution inspired by Michał Dymel.
When using Docker (in local dev), instead of removing X-Pack you can simply disable it:
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.5.3
docker run -p 9200:9200 \
-p 9300:9300 \
-e "discovery.type=single-node" \
-e "xpack.security.enabled=false" \
docker.elastic.co/elasticsearch/elasticsearch:5.5.3
I've managed to authenticate by setting xpack_security_enabled to false, but I still get some authentication errors in my Kibana log.
elasticsearch:
image: elasticsearch:1.7.6
ports:
- ${PIM_ELASTICSEARCH_PORT}:9200
- 9300:9300
kibana:
image: docker.elastic.co/kibana/kibana:5.4.1
environment:
SERVER_NAME: localhost
ELASTICSEARCH_URL: http://localhost:9200
XPACK_SECURITY_ENABLED: 'false'
ports:
- 5601:5601
links:
- elasticsearch
depends_on:
- elasticsearch
This is my current setup. On the Kibana dashboard I can see some errors, and in the Kibana logs I see:
kibana_1 | {"type":"log","#timestamp":"2017-06-15T07:43:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1 | {"type":"log","#timestamp":"2017-06-15T07:43:42Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://localhost:9200/"}
So it seems it is still trying to connect using authentication.
I had the same X-Pack issue, but with Kibana; I fixed it with the following command:
docker run docker.elastic.co/kibana/kibana:5.5.1 /bin/bash -c 'bin/kibana-plugin remove x-pack ; /usr/local/bin/kibana-docker'
It starts the container, then removes X-Pack, and after that starts the normal process. The same can be done with Elasticsearch and Logstash.

Bluemix: service bound to container does not appear in VCAP_SERVICES

I'm trying to use IBM Containers for Bluemix to deploy a container and bind it to a Bluemix service.
I start with an existing Bluemix app, which is bound to the MongoDB service I want. I verify that its VCAP_SERVICES environment variable is correctly populated:
$ cf env mamacdon-app
Getting env variables for app mamacdon-app in org mamacdon@ca.ibm.com / space dev as mamacdon@ca.ibm.com...
OK
System-Provided:
{
"VCAP_SERVICES": {
"mongodb-2.4": [
{
"credentials": { /*private data hidden*/ },
"label": "mongodb-2.4",
"name": "mongodb-1a",
"plan": "100",
"tags": [ "nosql", "document", "mongodb" ]
}
]
}
...
Then I run my image in Bluemix using the ice command, with the --bind mamacdon-app argument to bind it to my CF app:
$ ice run --name sshparty \
    --bind mamacdon-app \
    --ssh "$(cat ~/.ssh/id_rsa.pub)" \
    --publish 22 \
    registry-ice.ng.bluemix.net/ibmliberty:latest
(The --ssh and --publish 22 options are there for SSH access.)
As the name suggests, the image is a trivial example based on the IBM Websphere Liberty docker image -- just enough to let me SSH in and poke around.
At this point, the Containers dashboard tells me that the service has been bound to my container:
But when I finally ssh into the container, the environment does not contain the VCAP_SERVICES variable:
$ ssh -i ~/.ssh/id_rsa root@129.41.232.212
root@instance-000123e2:~# env
TERM=xterm
SHELL=/bin/bash
SSH_CLIENT=[private data hidden]
SSH_TTY=/dev/pts/0
USER=root
LS_COLORS=[omitted]
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/root
LANG=en_CA.UTF-8
SHLVL=1
HOME=/root
LOGNAME=root
SSH_CONNECTION=[private data hidden]
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
root@instance-000123e2:~#
I expected the VCAP_SERVICES variable to be injected. What am I doing wrong?
I think there is an issue with the way the SSH daemon is launched, such that it does not have visibility into the VCAP_SERVICES environment variable.
However, you can confirm that the container's command does see the variable with the following test:
ice run registry-ice.ng.bluemix.net/ibmliberty --bind mamacdon-app --name vcap_services_party printenv; sleep 60
Then confirm it in the printenv output with ice logs vcap_services_party.
Could you give the following a try:
ice run registry-ice.ng.bluemix.net/lintest/tradelite --bind yourappname --name yournewcontainername
Once the image comes up, run the following:
# echo $VCAP_SERVICES
For more info check out the Containers Docs.