How to remove analytics logs in MobileFirst 6.3 - ibm-mobilefirst

We are working on a MobileFirst 6.3 project, and our .war is deployed to a Liberty profile server.
We did not configure the TTL for analytics beforehand. Is there any way (a tool, REST service, or file-system operation) to remove the analytics logs in MobileFirst?

MobileFirst Platform Foundation Analytics uses ElasticSearch and Lucene at its core - there is nothing special to be done from a MobileFirst perspective.
If you want to remove everything, i.e. the whole Analytics store:
1 - Stop the Analytics server
2 - Delete the "analyticsData" folder, which is under servers/<server-name>/ in the Liberty installation
3 - Restart the server
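As a minimal shell sketch (the Liberty install root and server name are placeholders you need to adjust):
# stop the server, remove the Analytics store, start it again
<liberty-root>/bin/server stop <server-name>
rm -rf <liberty-root>/usr/servers/<server-name>/analyticsData
<liberty-root>/bin/server start <server-name>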
Otherwise, you can invoke a delete-by-query request using either cURL or Postman.
You can find the ElasticSearch API here: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html
Some additional questions about this topic on Stack Overflow:
Removing Data From ElasticSearch
Delete all documents from index/type without deleting type
http://www.tekkie.ro/quick-n-dirty/howto-quickly-erase-all-documents-from-an-elasticsearch-index/
Example steps:
1 - Open the ES port - MobileFirst uses port 9500.
2 - On the Analytics server, set the JNDI property http.enabled=true and restart the Analytics server (if it's a cluster, you still only need to open the port on one of the cluster members).
3 - The default "index" to use in your query is "worklight"; the mappings are documented in the user documentation and are shown on the admin tab in the Analytics console.
4 - The endpoint for your delete query needs to be the Analytics server.
Postman example query:
DELETE http://your-analytics-server:9500/worklight/network_transactions/_query
{
  "query": {
    "range": {
      "worklight_data.timestamp": {
        "to": 1432313605000
      }
    }
  }
}
CURL example query:
curl -XDELETE 'http://server:9500/worklight/network_transactions/_query' -d '{ "query" : { "range" : { "worklight_data.timestamp" : { "lte" : 1432222333424 } } } }'
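Before deleting, you can sanity-check how many documents the query matches (a sketch with the same host, port, and index assumptions as above; _count is a standard ElasticSearch endpoint):
# returns {"count": N, ...} for the documents that the range query matches
curl -XGET 'http://server:9500/worklight/network_transactions/_count' -d '{ "query" : { "range" : { "worklight_data.timestamp" : { "lte" : 1432222333424 } } } }'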

Related

Can I use Serilog Enrichers to get the Machine / Host Name?

I have been searching for how to get the client hostname/computer name in ASP.NET Core MVC on .NET Core 3.1.
There are threads here on SO, but most of them do not work for intranet apps (clients use a VPN to connect to the network).
I've seen a suggestion in one thread to use a Serilog enricher.
My question is: how can I use this one? Can I really get the machine name (the value passed from the application to the db) using this plugin?
You should be able to. Try this configuration (inside the "Serilog" section of appsettings.json):
"Using": [ "Serilog.Enrichers.ClientInfo" ],
"Enrich": [ "WithMachineName", "WithClientIp" ]
You'd need the NuGet package Serilog.Enrichers.ClientInfo (WithMachineName comes from Serilog.Enrichers.Environment).
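For reference, both enricher packages can be added with the .NET CLI:
dotnet add package Serilog.Enrichers.ClientInfo
dotnet add package Serilog.Enrichers.Environment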
UPDATE:
You can also write the logs directly to the database using one of the database sinks. For example, to write to an MSSQL database, you could use the sink here: Serilog.Sinks.MSSqlServer
This gives you control over which properties you want to log, including the client info.
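A minimal sketch of what that could look like in appsettings.json (assuming the sink's 5.x configuration shape; the connection string and table name are placeholders):
"WriteTo": [
  {
    "Name": "MSSqlServer",
    "Args": {
      "connectionString": "Server=localhost;Database=Logs;Trusted_Connection=True;",
      "tableName": "Logs",
      "autoCreateSqlTable": true
    }
  }
]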

How YARN does check health of hadoop nodes in YARN web console

I would like to know how the YARN Web UI running at port 8088 consolidates the health status of the DataNodes, NameNodes, and other cluster components.
For example, this is what I see when I open the Web UI:
"Hi guy, your all datanodes are healthy."
The ResourceManager REST APIs allow the user to get information about the cluster: status on the cluster, metrics on the cluster, scheduler information, information about nodes in the cluster, and information about applications on the cluster.
The example below is taken from the official documentation.
Request:
GET http://<rm http address:port>/ws/v1/cluster/nodes
Response:
{
  "nodes": {
    "node": [
      {
        "rack": "\/default-rack",
        "state": "NEW",
        "id": "h2:1235",
        "nodeHostName": "h2",
        "nodeHTTPAddress": "h2:2",
        "healthStatus": "Healthy",
        "lastHealthUpdate": 1324056895432,
        "healthReport": "Healthy",
        "numContainers": 0,
        "usedMemoryMB": 0,
        "availMemoryMB": 8192,
        "usedVirtualCores": 0,
        "availableVirtualCores": 8
      },
      {
        "rack": "\/default-rack",
        "state": "NEW",
        "id": "h1:1234",
        "nodeHostName": "h1",
        "nodeHTTPAddress": "h1:2",
        "healthStatus": "Healthy",
        "lastHealthUpdate": 1324056895092,
        "healthReport": "Healthy",
        "numContainers": 0,
        "usedMemoryMB": 0,
        "availMemoryMB": 8192,
        "usedVirtualCores": 0,
        "availableVirtualCores": 8
      }
    ]
  }
}
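For a quick check from the command line (a sketch; <rm-host> is a placeholder, and 8088 is the default ResourceManager web port):
# lists every node the RM knows about, including healthStatus and healthReport
curl -s -H 'Accept: application/json' 'http://<rm-host>:8088/ws/v1/cluster/nodes'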
More information can be found at the link below:
https://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html
I hope this helps.

'Unauthorized' to push images into SSL Artifactory Docker Registry

I'm sorry if this topic is a duplicate; I was not able to find anything similar to this problem.
Our Docker clients v17.X+ (Docker for Mac & Docker for Linux) are unable to push images to an SSL V2 registry, but authenticate successfully for pushes to an insecure V2 registry (a CNAME that serves the same machine). The output is always the same: unauthorized, even though docker login succeeds.
The weird thing is: with our old Docker clients (v1.6) we are able to log in and push Docker images to the secure v2 Docker registry without any problem, using the credentials file stored at ~/.dockercfg. My Nginx appears to be working just fine. Any ideas about what I'm missing here?
I'm attaching both credentials configuration files, in case anyone wants to check:
Docker client: v.17
~/.docker/config.json
{
  "auths": {
    "https://secure-docker-registry.intranet": {
      "auth": "someAuth",
      "email": "somemail@gmail.com"
    }
  },
  "credsStore": "osxkeychain"
}
Obs: In Docker for Mac's case, I tried both with 'credsStore' and without it.
Obs2: Even when allowing anonymous users to push images, I still get an unauthorized for this registry.
Obs3: The logs are not very clear about this problem.
Obs4: Artifactory is configured using an LDAP group.
Docker client: v.1.6.2
~/.dockercfg
{
  "secure-docker-registry.intranet": {
    "auth": "someAuth",
    "email": "somemail@gmail.com"
  },
  "insecure-docker-registry.intranet": {
    "auth": "someAuth",
    "email": "somemail@gmail.com"
  }
}
Artifactory Pro's version: 5.4.2
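As a debugging sketch (the hostname is taken from the config above), one way to inspect the auth challenge the secure registry returns is to hit the standard Docker Registry v2 endpoint directly:
# a 401 response with a Www-Authenticate header shows which auth service and scope the client must use
curl -i https://secure-docker-registry.intranet/v2/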

How to create a user-provided Redis service that the Spring auto-configuration cloud connectors pick up?

I have created a user-provided service for Redis as below:
cf cups p-redis -p "{\"host\":\"xx.xx.xxx.xxx\",\"password\":\"xxxxxxxx\",\"port\":6379}"
This is not getting picked up automatically by the Redis auto-reconfiguration or the service connectors, and I am getting a Jedis connection pool exception.
When I bind to a Redis service created from the marketplace, it works fine with the Spring Boot application. This confirms there is no issue with the code or configuration. I want a custom service for Redis to work with the Spring Boot app. How can I create such a service? What am I missing here? Is this possible?
System-Provided:
{
  "VCAP_SERVICES": {
    "user-provided": [
      {
        "credentials": {
          "host": "xx.xx.xxx.xxx",
          "password": "xxxxxxxx",
          "port": 6379
        },
        "label": "user-provided",
        "name": "p-redis",
        "syslog_drain_url": "",
        "tags": []
      }
    ]
  }
}
I could extend the abstract cloud connector and create the Redis factory myself, but I want it to work out of the box with a custom service and auto-configuration.
All routes to mapping this service automatically lead to the spring-cloud-connectors project. If you look at the implementation, a service must either be tagged with redis or expose a uri with a redis scheme under a credential key that is a permutation of uri.
If you'd like additional detection behavior, I'd recommend opening an issue in the GitHub repo.
What worked for me:
cf cups redis -p '{"uri":"redis://:PASSWORD@HOSTNAME:PORT"}' -t "redis"
Thanks to earlier answers that led me to this solution.
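If it helps, after creating (or updating) the service you still need to bind it and restage the app so the new VCAP_SERVICES entry takes effect (the app name is a placeholder):
cf bind-service my-app redis
cf restage my-app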

Worklight v6 IWAP

I set up a Worklight V6 Server and IWAP.
I found that my Worklight app console has an Analytics tab, and there is an IWAP console.
There are a dashboard view, a search view, a search log view, and a geo analytics view.
I then put WL.Logger.error and WL.Analytics.log code in my app and issued these logs,
but I cannot find any data in my IWAP console.
There is also the following NumberFormatException in my IWAP logs. Can I fix this?
[2013-06-24 18:02:35,998][DEBUG][action.search.type ] [Rattler] [worklight][7], node[M8YymIEGQbae4fbtkc2cyA], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest#465651a2]
org.elasticsearch.search.SearchParseException: [worklight][7]: from[0],size[-1],sort[<custom:"worklight_data.timestamp": org.elasticsearch.index.field.data.longs.LongFieldDataType$1#79b8644>!]: Parse Failure [Failed to parse source [{"sort": {"worklight_data.timestamp": {"order": "desc"}}, "from": 0, "script_fields": {}, "facets": {}, "query": {"query_string": {"query": "worklight_data.log.message:* AND worklight_data.timestamp:[NaN TO * ]"}}, "size": 1000}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:566)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:481)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:466)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:236)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:141)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:205)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:192)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:614)
at java.lang.Thread.run(Thread.java:779)
Caused by: java.lang.NumberFormatException: For input string: "NaN"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:76)
at java.lang.Long.parseLong(Long.java:452)
at java.lang.Long.parseLong(Long.java:494)
at org.elasticsearch.index.mapper.core.LongFieldMapper.rangeQuery(LongFieldMapper.java:176)
at org.apache.lucene.queryParser.MapperQueryParser.getRangeQuerySingle(MapperQueryParser.java:342)
at org.apache.lucene.queryParser.MapperQueryParser.getRangeQuery(MapperQueryParser.java:331)
at org.apache.lucene.queryParser.QueryParser.Term(QueryParser.java:1496)
at org.apache.lucene.queryParser.QueryParser.Clause(QueryParser.java:1319)
at org.apache.lucene.queryParser.QueryParser.Query(QueryParser.java:1275)
at org.apache.lucene.queryParser.QueryParser.TopLevelQuery(QueryParser.java:1234)
at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:206)
at org.elasticsearch.index.query.QueryStringQueryParser.parse(QueryStringQueryParser.java:212)
at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:188)
Please check the following:
1 - Are you using the Developer Edition? In Worklight v6, the Analytics console will not function in the Developer Edition. This may change in future releases. I have never seen that exception before, but I wonder if it occurs as a result of trying to run with the Developer Edition.
2 - In your initOptions.js, make sure analytics has been set to true:
analytics : { enabled: true }
3 - In worklight.properties, set the queue size to 1 so that analytics are immediately visible in the console:
wl.analytics.queue.size=1
Important note: the queue size should only be set to 1 while testing; a queue size of 1 will not scale in production.
If you continue to have issues, please post more information, such as the calls you are making to WL.Analytics.log and the analytics entries in your worklight.properties.
EDIT
If you are running Worklight 6.0 in development mode, you will also need the following flag in worklight.properties:
wl.analytics.debug=true
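Putting the two testing-only properties together, a sketch (the path to worklight.properties depends on your project; server/conf/ is typical for a Worklight project):
# testing only - do not use these values in production
cat >> server/conf/worklight.properties <<'EOF'
wl.analytics.queue.size=1
wl.analytics.debug=true
EOF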