Error - "INVALID" is not a valid start token - weblogic

In the Prometheus console I can see the target status as down, with the error: "INVALID" is not a valid start token.
I followed the steps below:
1. Install Prometheus on the linux1 machine.
2. Install WebLogic on the linux2 machine.
3. Deploy the jar file on the WebLogic server.
4. Verify the Gauge.
5. Add the WebLogic server entry in prometheus.yml.
6. Re-start the Prometheus service.
Detailed information is below:
prometheus logs :
level=warn ts=2019-09-06T11:42:42.187Z caller=scrape.go:937 component="scrape manager" scrape_pool=weblogic1 target=http://**********.*.****.*:7001/wls-exporter msg="append failed" err="\"INVALID\" is not a valid start token"
curl output1 :-
-bash-4.2$ curl http://**********.***.****.***:7001/wls-exporter | promtool check metrics
-bash: promtool: command not found
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1783 100 1783 0 0 323k 0 --:--:-- --:--:-- --:--:-- 348k
(23) Failed writing body
-bash-4.2$
curl output2 :-
-bash-4.2$ curl -v --noproxy '*' 'http://**********.***.****.***:7001/wls-exporter'
* About to connect() to **********.***.****.*** port 7001 (#0)
* Trying **.**.***.***...
* Connected to **********.***.****.*** (**.**.***.***) port 7001 (#0)
> GET /wls-exporter HTTP/1.1
> User-Agent: curl/7.29.0
> Host: **********.***.****.***:7001
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Fri, 06 Sep 2019 11:27:23 GMT
< Content-Length: 1783
<
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Weblogic Monitoring Exporter</title>
</head>
<body>
<h2>This is the WebLogic Monitoring Exporter.</h2>
<p>The metrics are found at <a href="/wls-exporter/metrics">
metrics</a> relative to this location.
</p>
<h2>Configuration</h2>
<p>To change the configuration:</p>
<form action="/wls-exporter/configure" method="post" enctype="multipart/form-data">
<input type="radio" name="effect" value="append">Append
<input type="radio" name="effect" value="replace" checked="checked">Replace
<br><input type="file" name="configuration">
<br><input type="submit">
</form>
<p>Current Configuration</p>
<p><code><pre>
host: **********.***.****.***
port: 7001
query_sync:
  url: http://coordinator:8999/
  refreshInterval: 5
metricsNameSnakeCase: true
domainQualifier: true
restPort: 7001
queries:
- key: name
  keyName: server
  applicationRuntimes:
    key: name
    keyName: app
    componentRuntimes:
      type: WebAppComponentRuntime
      prefix: webapp_config_
      key: name
      values: [deploymentState, contextRoot, sourceInfo, openSessionsHighCount, openSessionsCurrentCount, sessionsOpenedTotalCount, sessionCookieMaxAgeSecs, sessionInvalidationIntervalSecs, sessionTimeoutSecs, singleThreadedServletPoolSize, sessionIDLength, servletReloadCheckSecs, jSPPageCheckSecs]
      servlets:
        prefix: weblogic_servlet_
        key: servletName
        values: [invocationTotalCount, reloadTotal, executionTimeAverage, poolMaxCapacity, executionTimeTotal, reloadTotalCount, executionTimeHigh, executionTimeLow]
- JVMRuntime:
    prefix: jvm_
    key: name
    values: [heapFreeCurrent, heapFreePercent, heapSizeCurrent, heapSizeMax, uptime, processCpuLoad]
</pre></code></p>
* Connection #0 to host **********.***.****.*** left intact
-bash-4.2$

The error '"INVALID" is not a valid start token' is usually encountered when Prometheus expects the text exposition (OpenMetrics) format but gets something else. In this occurrence, that something else is the exporter landing page (if you omit the /metrics at the end of the URL) or an error page reported by the exporter (e.g. 401 - Authentication required).
Looking into the relevant source code, it seems the authentication credentials sent by Prometheus to the exporter are forwarded to the WebLogic API. The Prometheus config should look like:
- job_name: 'weblogic'
  ...
  basic_auth:
    username: weblogic
    password: friend
You can test it using curl with the relevant parameters:
curl -u 'weblogic:friend' http://**********.***.****.***:7001/wls-exporter/metrics | promtool check metrics
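If promtool isn't installed (as in the curl output above), a rough offline check can catch the two usual culprits, an HTML landing page or an error page, before Prometheus does. This is only a sketch; the function name and heuristic are mine, not part of any Prometheus tooling:

```python
def looks_like_exposition_format(body: str) -> bool:
    """Rough check: the first non-blank line of Prometheus text-format
    output must be a comment (# HELP / # TYPE) or start like a metric
    name. HTML pages (e.g. the exporter landing page) fail this."""
    for line in body.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        first = stripped[0]
        return first == "#" or first == "_" or first.isalpha()
    return False

# The exporter landing page served without /metrics is rejected:
print(looks_like_exposition_format("<!DOCTYPE html>\n<html>..."))    # False
# A real metrics payload passes:
print(looks_like_exposition_format("# HELP jvm_uptime JVM uptime"))  # True
```

This won't catch everything (a plain-text error page also starts with a letter), so `promtool check metrics` remains the authoritative check.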

In my case, my /metrics endpoint accidentally had a \ufeff byte-order mark (because in C#/.NET I called new StreamWriter(httpListenerResponse.OutputStream, Encoding.UTF8) rather than passing new UTF8Encoding(false)). So double-check that when you fetch the metrics endpoint (e.g. curl localhost:8080/metrics | hexdump -c), it's all ASCII and the document looks like the Prometheus exposition format.
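The BOM failure mode can be reproduced and checked in a few lines; a hedged sketch (the helper name is mine), useful when hexdump shows 357 273 277 at the start of the payload:

```python
def strip_bom(raw: bytes) -> bytes:
    """Drop a leading UTF-8 BOM (bytes 0xEF 0xBB 0xBF) if present.
    Prometheus's parser reads the BOM as part of the first token, so a
    payload like '\\ufeff# HELP ...' triggers the start-token error."""
    bom = b"\xef\xbb\xbf"
    return raw[len(bom):] if raw.startswith(bom) else raw

payload = "\ufeff# HELP up 1\n".encode("utf-8")
print(payload[:3])             # b'\xef\xbb\xbf'
print(strip_bom(payload)[:6])  # b'# HELP'
```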

Related

Strange CURL issue with a particular website SSL certificate

I am trying to use curl to get web pages from a particular website, however it gives this error:
curl -q -v -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://www.saiglobal.com/ --output ./Downloads/test.html
....
* SSL certificate verify ok.
} [5 bytes data]
> GET / HTTP/1.1
> Host: www.saiglobal.com
> User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
> Accept: */*
>
0 0 0 0 0 0 0 0 --:--:-- 0:11:53 --:--:-- 0* OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
* stopped the pause stream!
0 0 0 0 0 0 0 0 --:--:-- 0:11:53 --:--:-- 0
* Closing connection 0
} [5 bytes data]
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
I am not sure what is going on; I can't find much useful info about this error message. On my Mac, the errno is 60 instead of 104.
However, Chrome on these machines can load the page without any issue. One of the machines' curl version is 7.58.0.
Any help is appreciated.
The problem is not the certificate of this site. From the debug output it can clearly be seen that the TLS handshake completes successfully, and outside this handshake the certificate does not matter.
But it can also be seen that www.saiglobal.com is served through the Akamai CDN, and Akamai features some kind of bot detection:
$ dig www.saiglobal.com
...
www.saiglobal.com. 45 IN CNAME www.saiglobal.com.edgekey.net.
www.saiglobal.com.edgekey.net. 62 IN CNAME e9158.a.akamaiedge.net.
This bot detection is known to use some heuristics in order to distinguish bots from normal browsers, and detection of a bot might result in a status code 403 access denied or in a simple hang of the site - see Scraping attempts getting 403 error or Requests SSL connection timeout.
In this specific case it currently seems to help if some specific HTTP headers are added: Accept-Encoding, Accept-Language, Connection with a value of keep-alive, and a User-Agent that somehow matches Mozilla. Omitting these headers or using the wrong values results in a hang.
The following works currently for me:
$ curl -q -v \
-H "Connection: keep-alive" \
-H "Accept-Encoding: identity" \
-H "Accept-Language: en-US" \
-H "User-Agent: Mozilla/5.0" \
https://www.saiglobal.com/
Note that this deliberately tries to bypass the bot detection. It might stop working if Akamai makes changes to the bot detection.
Please note also that the owner of the site has explicitly enabled bot detection for a reason. This means that by deliberately bypassing the detection for your own gain (like providing some service based on scraped information) you might run into legal problems.

"Unable to retrieve API" when exporting tenant api using api-import-export in WSO2 api manager

I am trying to export an api that belongs to a specific tenant in WSO2 api manager. Here is the curl command and output :
[Ananke:: 15:47] [~] > curl -H "Authorization:Basic Blablablaredacted"
-X GET "https://labwso2:9445/api-import-export-v0.9.1/export-api?
name=geo.vdm/GeoTrafic&version=v1.0.0&provider=geoadmin#geo.vdm" -k -vv > GeoTrafic.zip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.96.20.87...
* Connected to labwso2 (labwso2 ip redacted) port 9445 (#0)
* TLS 1.0 connection using TLS_RSA_WITH_AES_256_CBC_SHA
* Server certificate: labwso2
* Server certificate: blablabla
> GET /api-import-export-v0.9.1/export-api?name=geo.vdm/GeoTrafic&version=v1.0.0&provider=geoadmin#geo.vdm HTTP/1.1
> Host: labwso2:9445
> User-Agent: curl/7.43.0
> Accept: */*
> Authorization:Basic Blablablaredacted
>
< HTTP/1.1 404 Not Found
< Cache-Control: private
< Expires: Wed, 31 Dec 1969 19:00:00 EST
< Date: Tue, 01 Dec 2015 20:47:34 GMT
< Content-Type: application/json
< Content-Length: 22
< Server: WSO2 Carbon Server
<
{ [22 bytes data]
100 22 100 22 0 0 200 0 --:--:-- --:--:-- --:--:-- 201
* Connection #0 to host labwso2 left intact
[Ananke:: 15:47] [~] > more GeoTrafic.zip
Unable to retrieve API
[Ananke:: 15:47] [~] >
I have used copy and paste for the api name and version and checked that they are indeed published and functional. I have also tried to tweak the url to add something like /t/geo.vdm to my link (after deploying the importer .war file for the tenant) but to no avail.
How do I specify a tenant API?
I was able to successfully export an API using the import-export tool with the curl command below:
curl -H "Authorization:Basic <base64-encoded-username-and-password-separated-by-a-colon>" -X GET "https://localhost:9443/api-import-export-v0.9.1/export-api?name=testAPI&version=v1&provider=channa#test.com" -k > myAPI.zip
Here I had to place the api-import-export-v0.9.1.war file inside the /repository/deployment/server/webapps folder.
Please make sure the above *.war file is deployed correctly.
I used tenant admin's credentials for exporting the API.
Then I imported the myAPI.zip using following curl command:
curl -H "Authorization:Basic YWRtaW46YWRtaW4=" -F file=@"/home/channa/Desktop/myAPI.zip" -k -X POST "https://localhost:9443/api-import-export-v0.9.1/import-api?preserveProvider=false"
Here I had to use "preserveProvider=false" because I exported the API using a different provider.
If you are not able to solve it following the above steps, please share the carbon stacktrace to investigate further.
It can be found at: /repository/logs/wso2carbon.log
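As a side note, the Basic value in the Authorization header is just the base64 of username:password separated by a colon; the YWRtaW46YWRtaW4= above decodes to admin:admin. A quick sketch for building it (the helper name is mine):

```python
import base64

def basic_auth_value(username: str, password: str) -> str:
    """Build the value for an 'Authorization: Basic ...' header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_value("admin", "admin"))  # Basic YWRtaW46YWRtaW4=
```

For a tenant, use the tenant admin's full username rather than plain admin when encoding.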

"The request is badly formed" while trying to fetch Azure Management Events via REST API

According to that article https://msdn.microsoft.com/en-us/library/azure/dn931934.aspx,
I am trying to fetch the list of events doing following:
$ curl -X GET -H "Content-Type:application/json" -H "Authorization:Bearer $TOKEN" "https://management.azure.com/subscriptions/SUBSCRIP-TI0N-xxxx-xxxx-xxxxxxxxxxxx/providers/microsoft.insights/eventtypes/management/values?api-version=2014-04-01&\$filter=eventTimestamp ge '2014-12-29T22:00:37Z' and eventTimestamp le '2014-12-29T23:36:37Z' and eventChannels eq 'Admin, Operation'" -v
* Hostname was NOT found in DNS cache
* Trying 23.97.164.182...
* Connected to management.azure.com (23.97.164.182) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* Server certificate: management.azure.com
* Server certificate: Microsoft IT SSL SHA2
* Server certificate: Baltimore CyberTrust Root
> GET /subscriptions/SUBSCRIP-TI0N-xxxx-xxxx-xxxxxxxxxxxx/providers/microsoft.insights/eventtypes/management/values?api-version=2014-04-01&$filter=eventTimestamp ge '2014-12-29T22:00:37Z' and eventTimestamp le '2014-12-29T23:36:37Z' and eventChannels eq 'Admin, Operation' HTTP/1.1
> User-Agent: curl/7.37.1
> Host: management.azure.com
> Accept: */*
> Content-Type:application/json
> Authorization:Bearer {mytokenthere}
>
But the result is:
< HTTP/1.1 400 Bad Request
< Content-Type: text/html; charset=us-ascii
* Server Microsoft-HTTPAPI/2.0 is not blacklisted
< Server: Microsoft-HTTPAPI/2.0
< Date: Tue, 01 Dec 2015 10:06:38 GMT
< Connection: close
< Content-Length: 311
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request</h2>
<hr><p>HTTP Error 400. The request is badly formed.</p>
</BODY></HTML>
But it seems to be exactly the same request as in msdn example.
I believe something must be wrong with the $filter param; when I amend it, the response states that the filter parameter is invalid:
curl -X GET -H "Content-Type:application/json" -H "Authorization:Bearer $TOKEN" "https://management.azure.com/subscriptions/SUBSCRIP-TI0N-xxxx-xxxx-xxxxxxxxxxxx/providers/microsoft.insights/eventtypes/management/values?api-version=2014-04-01"
<Error xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/windowsazure"><Code>BadRequest</Code><Message>The $filter query parameter value is invalid.</Message></Error>
I also tried other filter options from the msdn article - the result is the same.
What am I doing wrong?
Oh, so it turns out that I was calling curl incorrectly and the problem was actually trivial :)
This is the way it works:
curl -H "Accept:application/json" -H "Authorization: Bearer $TOKEN" 'https://management.azure.com/subscriptions/{subscription_id}/providers/microsoft.insights/eventtypes/management/values?api-version=2014-04-01&$filter=eventTimestamp%20ge%20%272015-11-29T22:00:37Z%27'
So %20 is used instead of spaces and %27 instead of single quotes, and it works!
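Rather than hand-encoding, the $filter value can be percent-encoded programmatically; a sketch (keeping the colons inside the timestamps literal, as in the working URL above):

```python
from urllib.parse import quote

filter_expr = "eventTimestamp ge '2015-11-29T22:00:37Z'"
# safe=":" keeps the timestamp colons literal while encoding
# spaces as %20 and single quotes as %27
encoded = quote(filter_expr, safe=":")
print(encoded)
# eventTimestamp%20ge%20%272015-11-29T22:00:37Z%27
```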
I believe the problem is in the date range you're specifying. Please note that you can only query the last 90 days of data, and you're querying data from last year:
eventTimestamp ge '2014-12-29T22:00:37Z' and eventTimestamp le
'2014-12-29T23:36:37Z'
Please try changing the date/time range. Otherwise your query looks fine.

Vagrant is not port forwarding with VirtualBox and PuPHPet

Problem
I cannot connect to my virtual machine's environment, despite using an accepted private network and forwarded ports.
Description
System
MAC OSX 10.9.3
Vagrant 1.6.3
VirtualBox 4.3.12
Vagrantfile file
Puphpet file
Cisco AnyConnect VPN
Using Private Company Network
ifconfig results
Virtual Host file for Web Project.
Upon vagrant up, I vagrant ssh into my VirtualBox VM. The following requests work as expected:
[06:04 PM]-[vagrant#precise64]-[~]
$ curl -v 192.168.56.101
* About to connect() to 192.168.56.101 port 80 (#0)
* Trying 192.168.56.101... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 192.168.56.101
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 09 Jul 2014 18:05:05 GMT
< Server: Apache/2.4.9 (Ubuntu)
< Vary: Accept-Encoding
< Content-Length: 481
< Connection: close
< Content-Type: text/html;charset=UTF-8
<
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<head>
<title>Index of /</title>
</head>
<body>
<h1>Index of /</h1>
<table>
<tr><th valign="top"><img src="/icons/blank.gif" alt="[ICO]"></th><th>Name</th><th>Last modified</th><th>Size</th><th>Description</th></tr>
<tr><th colspan="5"><hr></th></tr>
<tr><th colspan="5"><hr></th></tr>
</table>
</body></html>
* Closing connection #0
[06:05 PM]-[vagrant#precise64]-[~]
$ curl -v playworldsystems.dev
* About to connect() to playworldsystems.dev port 80 (#0)
* Trying 192.168.56.101... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: playworldsystems.dev
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 09 Jul 2014 18:05:52 GMT
< Server: Apache/2.4.9 (Ubuntu)
< X-Powered-By: PHP/5.5.14-2+deb.sury.org~precise+1
< Vary: Accept-Encoding
< Content-Length: 122
< Connection: close
< Content-Type: text/html
<
<pre class='xdebug-var-dump' dir='ltr'><small>string</small> <font color='#cc0000'>'hello'</font> <i>(length=5)</i>
* Closing connection #0
</pre>
However, when I try both commands from within my HOST terminal, I receive the following error:
curl -v 192.168.56.101
* About to connect() to 192.168.56.101 port 80 (#0)
* Trying 192.168.56.101...
* Adding handle: conn: 0x7fc6f1000000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fc6f1000000) send_pipe: 1, recv_pipe: 0
* Failed connect to 192.168.56.101:80; Operation timed out
* Closing connection 0
curl: (7) Failed connect to 192.168.56.101:80; Operation timed out
☁ ~ curl -v playworldsystems.dev
* Adding handle: conn: 0x7fdbe9803000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fdbe9803000) send_pipe: 1, recv_pipe: 0
* About to connect() to playworldsystems.dev port 80 (#0)
* Trying 192.168.56.101...
* Failed connect to playworldsystems.dev:80; Operation timed out
* Closing connection 0
curl: (7) Failed connect to playworldsystems.dev:80; Operation timed out
Even after trying to cURL both port 6969 and 8080, I still have no success. I've used random IP addresses as well as ports, and I've tried altering my virtual host to other port numbers. Nothing seems to work. I will mention that when first starting out, this vagrant setup did work, but only twice. Each time it was working, I would vagrant suspend and vagrant up the next morning to find that my prior solution no longer worked.
Perhaps this fact is what makes the process so frustrating. I want to get off MAMP for my current projects but fear that either my work machine's settings are interfering or it is some other application or network-related issue. I'm unsure what steps to take and am looking forward to any and all solutions.
A few tips/pointers:
1. Check that the firewall rules are not blocking the ports; if all else fails, just disable the firewall for testing with "iptables -F".
2. Make sure ports are forwarded correctly: https://docs.vagrantup.com/v2/networking/forwarded_ports.html - what you want for http is host:8080 guest:80.
3. Sometimes your IP address changes on your host (for example, acquiring a new DHCP lease or joining a new network: home/work/vpn).
4. Check that you can ping your VM guest from your host via the correct IP from the correct adapter. On the host run "ifconfig -a" (on the Mac there should be en0 for ethernet and en1 for airport). If you can ping the VM guest, then you can hit the web server on port 8080 ( http://mylocal.dev:8080/ ), provided of course that you edited your /etc/hosts to point your vhost at the above-mentioned IP address.
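For reference, the host:8080 to guest:80 mapping from point 2 looks like this in a Vagrantfile (a sketch to compare against the PuPHPet-generated file, not a drop-in replacement):

```ruby
Vagrant.configure("2") do |config|
  # host:8080 -> guest:80, so http://localhost:8080 reaches Apache in the VM
  config.vm.network "forwarded_port", guest: 80, host: 8080
  # the host-only address used in the curl tests above
  config.vm.network "private_network", ip: "192.168.56.101"
end
```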

Google Cloud MissingSecurityHeader error

While there is a thread about this problem in Google's FAQ, it seems only two answers there have satisfied other users. I'm certain there is no proxy on my network, and I'm pretty sure I've configured boto correctly, as I see credentials in the request.
Here's the capture from gsutil:
/// Output sanitized
Creating gs://64/...
DEBUG:boto:path=/64/
DEBUG:boto:auth_path=/64/
DEBUG:boto:Method: PUT
DEBUG:boto:Path: /64/
DEBUG:boto:Data: <CreateBucketConfiguration><LocationConstraint>US</LocationConstraint></\
CreateBucketConfiguration>
DEBUG:boto:Headers: {'x-goog-api-version': '2'}
DEBUG:boto:Host: storage.googleapis.com
DEBUG:boto:Params: {}
DEBUG:boto:establishing HTTPS connection: host=storage.googleapis.com, kwargs={'timeout':\
70}
DEBUG:boto:Token: None
DEBUG:oauth2_client:GetAccessToken: checking cache for key dc3
DEBUG:oauth2_client:FileSystemTokenCache.GetToken: key=dc3 present (cache_file=/tmp/o\
auth2_client-tokencache.1000.dc3)
DEBUG:oauth2_client:GetAccessToken: token from cache: AccessToken(token=ya29, expiry=2\
013-07-19 21:05:51.136103Z)
DEBUG:boto:wrapping ssl socket; CA certificate file=.../gsutil/third_party/boto/boto/cace\
rts/cacerts.txt
DEBUG:boto:validating server certificate: hostname=storage.googleapis.com, certificate ho\
sts=['*.googleusercontent.com', '*.blogspot.com', '*.bp.blogspot.com', '*.commondatastora\
ge.googleapis.com', '*.doubleclickusercontent.com', '*.ggpht.com', '*.googledrive.com', '\
*.googlesyndication.com', '*.storage.googleapis.com', 'blogspot.com', 'bp.blogspot.com', \
'commondatastorage.googleapis.com', 'doubleclickusercontent.com', 'ggpht.com', 'googledri\
ve.com', 'googleusercontent.com', 'static.panoramio.com.storage.googleapis.com', 'storage\
.googleapis.com']
GSResponseError: status=400, code=MissingSecurityHeader, reason=Bad Request, detail=A non\
empty x-goog-project-id header is required for this request.
send: 'PUT /64/ HTTP/1.1\r\nHost: storage.googleapis.com\r\nAccept-Encoding: iden\
tity\r\nContent-Length: 98\r\nx-goog-api-version: 2\r\nAuthorization: Bearer ya29\r\nU\
ser-Agent: Boto/2.9.7 (linux2)\r\n\r\n<CreateBucketConfiguration><LocationConstraint>US</\
LocationConstraint></CreateBucketConfiguration>'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Type: application/xml; charset=UTF-8^M
header: Content-Length: 232^M
header: Date: Fri, 19 Jul 2013 20:44:24 GMT^M
header: Server: HTTP Upload Server Built on Jul 12 2013 17:12:36 (1373674356)^M
It looks like you might not have a default_project_id specified in your .boto file.
It should look something like this:
[GSUtil]
default_project_id = 1234567890
Alternatively, you can pass the -p option to the gsutil mb command to manually specify a project. From the gsutil mb documentation:
-p proj_id Specifies the project ID under which to create the bucket.
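A quick way to sanity-check the .boto file programmatically (a sketch; the parsing mirrors the INI layout shown above, and the inline text is a stand-in for your real config):

```python
import configparser

# Stand-in for the contents of ~/.boto -- replace with your real file.
BOTO_TEXT = """\
[GSUtil]
default_project_id = 1234567890
"""

cfg = configparser.ConfigParser()
cfg.read_string(BOTO_TEXT)
project = cfg.get("GSUtil", "default_project_id", fallback=None)
print(project)  # 1234567890
```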