Send logs directly to Loki without using agents - API

Is there a way to send logs to Loki directly, without having to use one of its agents?
For example, if I have an API, is it possible to send request/response logs directly to Loki from that API, without going through an agent such as Promtail?
Thanks in advance!

Loki HTTP API
The Loki HTTP API allows pushing log entries directly to the Grafana Loki server:
POST /loki/api/v1/push
/loki/api/v1/push is the endpoint used to send log entries to Loki.
The default behavior is for the POST body to be a snappy-compressed
protobuf message:
- Protobuf definition
- Go client library
Alternatively, if the Content-Type header is set to application/json,
a JSON post body can be sent in the following format:
{
  "streams": [
    {
      "stream": {
        "label": "value"
      },
      "values": [
        [ "<unix epoch in nanoseconds>", "<log line>" ],
        [ "<unix epoch in nanoseconds>", "<log line>" ]
      ]
    }
  ]
}
You can set the Content-Encoding: gzip request header and post gzipped JSON.
Example:
curl -v -H "Content-Type: application/json" -XPOST -s "http://localhost:3100/loki/api/v1/push" --data-raw \
'{"streams": [{ "stream": { "foo": "bar2" }, "values": [ [ "1570818238000000000", "fizzbuzz" ] ] }]}'
So it is easy to create a JSON-formatted string with logs and send it to Grafana Loki.
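For instance, pushing a single log line from plain Java could look roughly like this (a minimal sketch, assuming a local Loki on port 3100 as in the curl example above; the label and message are placeholders):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LokiPushExample {
    public static void main(String[] args) throws Exception {
        // Loki push endpoint (assumes a local instance, as in the curl example above)
        URL url = new URL("http://localhost:3100/loki/api/v1/push");
        // Timestamp must be a unix epoch in nanoseconds, sent as a string
        long nanos = System.currentTimeMillis() * 1_000_000L;
        String body = "{\"streams\": [{ \"stream\": { \"app\": \"my-api\" },"
                + " \"values\": [[ \"" + nanos + "\", \"request handled\" ]] }]}";
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // Loki replies 204 No Content on success
        System.out.println("Loki responded: " + conn.getResponseCode());
    }
}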
Libraries
There are some libraries implementing several Grafana Loki protocols.
There is also (my) zero-dependency library in pure Java 1.8, which implements pushing logs in JSON format to Grafana Loki. It works on the Java SE and Android platforms:
https://github.com/mjfryc/mjaron-tinyloki-java
Security
The API above doesn't support any access restrictions, as noted in the documentation - when exposing it over a public network, consider e.g. configuring an Nginx proxy with HTTPS (certificates from Certbot) and Basic Authentication.
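A minimal sketch of such an Nginx front end (the hostname, certificate paths, and htpasswd file are assumptions, not taken from the Loki docs):
server {
    listen 443 ssl;
    server_name loki.example.com;   # hypothetical hostname
    ssl_certificate     /etc/letsencrypt/live/loki.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/loki.example.com/privkey.pem;

    location /loki/api/v1/push {
        auth_basic           "Loki";
        auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd
        proxy_pass           http://127.0.0.1:3100;
    }
}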

Yes. You can send logs directly from a Java application to Loki.
It can be done using loki4j in your Java Spring Boot project. Add this dependency to pom.xml:
<dependency>
    <groupId>com.github.loki4j</groupId>
    <artifactId>loki-logback-appender</artifactId>
    <version>1.2.0</version>
</dependency>
Run Loki either directly or from Docker, depending on how you have installed it on your system. I use Docker instances of Loki and Grafana.
Create a logback.xml in your Spring Boot project with the following contents:
<configuration>
    <property name="HOME_LOG" value="app.log" />
    <appender name="FILE-ROLLING" class="com.github.loki4j.logback.Loki4jAppender">
        <http>
            <url>http://localhost:3100/loki/api/v1/push</url>
        </http>
        <format>
            <label>
                <pattern>app=my-app,host=${HOSTNAME},level=%level</pattern>
            </label>
            <message>
                <pattern>l=%level h=${HOSTNAME} c=%logger{20} t=%thread | %msg %ex</pattern>
            </message>
            <sortByTime>true</sortByTime>
        </format>
    </appender>
    <logger name="com.vasanth.loki" level="debug" additivity="false">
        <appender-ref ref="FILE-ROLLING" />
    </logger>
    <root level="error">
        <appender-ref ref="FILE-ROLLING" />
    </root>
</configuration>
Configure your logger names in the above example and make sure you have given the proper Loki URL. You are basically telling the application to write logs to an output stream going directly to the Loki URL, instead of the traditional way of writing logs to a file through a log4j configuration and then using Promtail to fetch those logs and load them into Loki.
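With that in place, ordinary SLF4J logging calls end up in Loki - roughly like this (a sketch: the class is made up, and its package must fall under the com.vasanth.loki logger configured above for the debug level to apply):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // Routed to Loki by the Loki4jAppender configured in logback.xml
        log.debug("placing order {}", orderId);
    }
}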

Related

GCP API Gateway with an API Key fails with 403 error stating ... .cloud.goog is not enabled for the project

First things first, let me show you some of my gcloud settings. When I run gcloud config list, this is my output:
[core]
account = <SERVICE ACCOUNT NAME>@<PROJECT NAME>.iam.gserviceaccount.com
disable_usage_reporting = True
project = <PROJECT NAME>
Your active configuration is: [default]
When I run gcloud services list, this is my output:
apigateway.googleapis.com API Gateway API
artifactregistry.googleapis.com Artifact Registry API
bigquery.googleapis.com BigQuery API
bigquerymigration.googleapis.com BigQuery Migration API
bigquerystorage.googleapis.com BigQuery Storage API
cloudapis.googleapis.com Google Cloud APIs
cloudbuild.googleapis.com Cloud Build API
clouddebugger.googleapis.com Cloud Debugger API
cloudfunctions.googleapis.com Cloud Functions API
cloudresourcemanager.googleapis.com Cloud Resource Manager API
cloudtrace.googleapis.com Cloud Trace API
containerregistry.googleapis.com Container Registry API
datastore.googleapis.com Cloud Datastore API
eventarc.googleapis.com Eventarc API
iam.googleapis.com Identity and Access Management (IAM) API
iamcredentials.googleapis.com IAM Service Account Credentials API
logging.googleapis.com Cloud Logging API
monitoring.googleapis.com Cloud Monitoring API
oslogin.googleapis.com Cloud OS Login API
pubsub.googleapis.com Cloud Pub/Sub API
run.googleapis.com Cloud Run Admin API
secretmanager.googleapis.com Secret Manager API
servicecontrol.googleapis.com Service Control API
servicemanagement.googleapis.com Service Management API
serviceusage.googleapis.com Service Usage API
source.googleapis.com Legacy Cloud Source Repositories API
sql-component.googleapis.com Cloud SQL
storage-api.googleapis.com Google Cloud Storage JSON API
storage-component.googleapis.com Cloud Storage
storage.googleapis.com Cloud Storage API
sts.googleapis.com Security Token Service API
I have an API Gateway with the following config file:
swagger: '2.0'
info:
  title: <API TITLE>
  description: API Gateway First for Sphrn Testing
  version: 1.0.0
securityDefinitions:
  api_key_header:
    type: apiKey
    name: x-api-key
    in: header
schemes:
  - https
produces:
  - application/json
paths:
  /entrypoint1:
    post:
      summary: Simple echo service
      operationId: <OPERATION ID HERE>
      x-google-backend:
        address: https://<CLOUD FUNCTION NAME>-<STRING I DON'T RECOGNIZE>-uc.a.run.app
      security:
        - api_key_header: []
      responses:
        '200':
          description: OK
I call the API from my command line with this script:
curl --location --request POST 'https://<API CALLABLE ENDPOINT>.uc.gateway.dev/endpoint1' \
--header 'X-goog-api-key: <MY API KEY HERE>' \
--header 'Content-Type: application/json; charset=utf-8' \
--data-raw '{
"name": "Test1"
}'
but it fails with this in my terminal:
{"code":403,"message":"PERMISSION_DENIED:API <SERVICE ACCOUNT NAME>-<STRING I DON'T RECOGNIZE>.apigateway.<PROJECT NAME>.cloud.goog is not enabled for the project."}
I went into the Logs Explorer for the API Gateway endpoint, and these are the more detailed logs from my failed 403 curl command (sanitized for identifying information, of course):
{
  "httpRequest": {
    "latency": "0.040s",
    "protocol": "http",
    "remoteIp": "<MY IP ADDRESS>",
    "requestMethod": "POST",
    "requestSize": "1053",
    "requestUrl": "/endpoint1",
    "responseSize": "346",
    "status": 403
  },
  "insertId": "<LONG GUID LOOKING STRING>#a1",
  "jsonPayload": {
    "api_key": "<MY API KEY>",
    "api_key_state": "NOT ENABLED",
    "api_method": "1.<API ID>_<STRING I DON'T RECOGNIZE>_apigateway_<PROJECT NAME>_cloud_goog.<OPERATIONID FROM CONFIG YAML>",
    "api_name": "1.<API ID>_<STRING I DON'T RECOGNIZE>_apigateway_<PROJECT NAME>_cloud_goog",
    "api_version": "1.0.0",
    "error_cause": "API <API ID>_<STRING I DON'T RECOGNIZE>.apigateway.<PROJECT NAME>.cloud.goog is not enabled for the project.",
    "http_status_code": 403,
    "location": "us-central1",
    "log_message": "1.<API ID>_<STRING 1 I DON'T RECOGNIZE>_apigateway_<PROJECT NAME>_cloud_goog.<OPERATIONID FROM CONFIG YAML> is called",
    "producer_project_id": "<PROJECT NAME>",
    "response_code_detail": "service_control_check_error{SERVICE_NOT_ACTIVATED}",
    "service_agent": "ESPv2/2.40.0",
    "service_config_id": "<CONFIGURATION ID>",
    "timestamp": "<TIMESTAMP HERE AS DECIMAL>"
  },
  "logName": "projects/<PROJECT NAME>/logs/<API ID>_<STRING I DON'T RECOGNIZE>.apigateway.<PROJECT NAME>.cloud.goog%2Fendpoints_log",
  "receiveTimestamp": "<TIMESTAMP HERE AS STRING>",
  "resource": {
    "labels": {
      "location": "us-central1",
      "method": "1.<API ID>-<STRING I DON'T RECOGNIZE>_apigateway_<PROJECT NAME>_cloud_goog.<OPERATIONID FROM CONFIG YAML>",
      "project_id": "<PROJECT NAME>",
      "service": "<API ID>-<STRING I DON'T RECOGNIZE>.apigateway.<PROJECT NAME>.cloud.goog",
      "version": "1.0.0"
    },
    "type": "api"
  },
  "severity": "ERROR",
  "timestamp": "<TIMESTAMP HERE AS STRING>"
}
So how do I get this curl to succeed...? I'm assuming it's a permissions issue, but what permission does my service account not have?
When I run:
gcloud projects get-iam-policy <PROJECT ID> \
    --flatten="bindings[].members" \
    --format='table(bindings.role)' \
    --filter="bindings.members:<SERVICE ACCOUNT NAME>@<PROJECT NAME>.iam.gserviceaccount.com"
I get this output:
ROLE
roles/cloudfunctions.serviceAgent
roles/serviceusage.serviceUsageViewer
I had to enable the service using my actual "master" Gmail account (the one with which I created the GCP project), enabling the service <SERVICE ACCOUNT NAME>-....apigateway.<PROJECT NAME>.cloud.goog via gcloud commands. Then I had one more problem: the operationId listed in my OpenAPI config YAML file was not enabled in the API key restrictions menu.
I'm assuming anyone reading this has already logged in with their service account via gcloud auth login and activated the relevant service account with gcloud auth activate-service-account <SERVICE ACCOUNT NAME>@<PROJECT NAME>.iam.gserviceaccount.com --key-file=/path/to/keyfile.json.
Enable Service Fix
I switched my gcloud account to my "master" account with gcloud config set account <MASTER GCLOUD ACCOUNT NAME>@gmail.com, then:
gcloud services enable <SERVICE ACCOUNT NAME>-....apigateway.<PROJECT NAME>.cloud.goog \
    --project=<PROJECT ID (THE NUMBER, NOT THE TEXT NAME)>
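To confirm the change took effect, the service should now appear among the enabled services (a quick sanity check, assuming the same project):
gcloud services list --enabled --project=<PROJECT ID> | grep apigateway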
After this, calling the API with my API key in the header gave me a new error: {"message":"PERMISSION_DENIED: The API targeted by this request is invalid for the given API key.","code":403}
operationId API Restriction Menu Fix
I had to enable the operationId listed in my OpenAPI config YAML file in the API key restrictions menu. After that, it appeared in the "Selected APIs" section of the API Key Credentials page.
After making this change, my curl request:
curl --location --request POST 'https://<API CALLABLE ENDPOINT>.uc.gateway.dev/endpoint1' \
--header 'X-goog-api-key: <MY API KEY HERE>' \
--header 'Content-Type: application/json; charset=utf-8' \
--data-raw '{
"name": "Test1"
}'
worked perfectly!

MongooseIM REST API connection issue with local setup

I have set up MongooseIM [3.3.0] on Ubuntu 14.04 and it works perfectly with the Android client setup. Then I needed to test the REST API for creating a room, and I got this error when running the request from the Swagger documentation:
curl -X GET --header 'Accept: application/json' --header 'Authorization: Basic dXNlcjpwYXNzd29yZA==' 'http://localhost:8089/api/rooms'
curl: (52) Empty reply from server
This is the MongooseIM config section related to the REST API:
{ 8089, ejabberd_cowboy, [
    {num_acceptors, 10},
    {transport_options, [{max_connections, 1024}]},
    {protocol_options, [{compress, true}]},
    {ssl, [{certfile, "priv/ssl/fake_cert.pem"}, {keyfile, "priv/ssl/fake_key.pem"}, {password, ""}]},
    {modules, [
        {"_", "/api/sse", lasse_handler, [mongoose_client_api_sse]},
        {"_", "/api/messages/[:with]", mongoose_client_api_messages, []},
        {"_", "/api/contacts/[:jid]", mongoose_client_api_contacts, []},
        {"_", "/api/rooms/[:id]", mongoose_client_api_rooms, []},
        {"_", "/api/rooms/[:id]/config", mongoose_client_api_rooms_config, []},
        {"_", "/api/rooms/:id/users/[:user]", mongoose_client_api_rooms_users, []},
        {"_", "/api/rooms/[:id]/messages", mongoose_client_api_rooms_messages, []}
    ]}
]}
This is the Swagger documentation I referred to: https://mongooseim.readthedocs.io/en/3.3.0/swagger/index.html
I noticed the following things:
- The curl example you provided tries to send the request to MongooseIM over HTTP.
- Based on the part of the config file you provided, I can see that MongooseIM expects HTTPS traffic.
It looks like changing the endpoint in your curl command to https://localhost:8089/api/rooms will help - provided you run the command on the same machine that MongooseIM is running on. Otherwise, change localhost to the proper hostname or IP address of that machine.
What's more, in the config file I can see that the REST API is configured with the default fake, self-signed certificates. I strongly encourage you to change these to real certificates. For the sake of testing, you will need to add the -k option to your curl command in order to skip certificate verification.
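For example, your original command with just those two changes applied:
curl -k -X GET --header 'Accept: application/json' --header 'Authorization: Basic dXNlcjpwYXNzd29yZA==' 'https://localhost:8089/api/rooms'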

Elasticsearch logging with NLog fails in ASP.NET Core API

I'm running some tests for logging to an Elasticsearch instance using NLog in our API. The Elasticsearch instance is running inside Docker. If the API is executed using IIS Express, I can log to Elasticsearch without a problem and I can look at the "logstash" index created, but if I run the API inside a Docker container, the logs never reach Elasticsearch and the index is never created.
My NLog config:
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      throwConfigExceptions="true"
      internalLogLevel="info"
      internalLogFile="c:\temp\internal-nlog-AspNetCore3.txt">
  <extensions>
    <add assembly="NLog.Targets.ElasticSearch"/>
  </extensions>
  <targets>
    <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000">
      <target xsi:type="ElasticSearch"/>
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
    <logger name="Microsoft.*" maxlevel="Info" final="true" />
  </rules>
</nlog>
And in my appsettings.json:
"ElasticsearchUrl": "http://192.168.0.9:9200",
Perhaps I'm missing something or I'm not understanding the interaction between the containers.
(1) Your question doesn't provide any details about the configuration of the two containers (one running your app, one running Elasticsearch).
I have an example logging to Elasticsearch, configured with Kibana to view the results. It uses a different logger provider (Essential.LoggerProvider.Elasticsearch), but it has a docker-compose file that shows the connection between Elasticsearch and Kibana: https://github.com/sgryphon/essential-logging/tree/master/examples/HelloElasticsearch
# Docker Compose file for E-K stack
# Run with:
#   docker-compose up -d
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.1
    ...
    networks:
      - elastic-network
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:7.6.1
    ...
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    networks:
      - elastic-network
networks:
  elastic-network:
    driver: bridge
The relevant parts show setting up a network bridge between the two Docker containers, and then the connection between them.
While "http://192.168.0.9:9200" might be the correct connection from outside (your IIS) into Elasticsearch, you would have to check whether that is how your API's Docker container sees the Elasticsearch machine; e.g. in the example above, Kibana sees Elasticsearch as "http://elasticsearch:9200".
You would need to update the question with details of your docker configuration, e.g. the command line you run to start them, or a docker compose file, etc. to work out why they can't see each other.
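For instance, if your API container were added to the same compose file, it could reach Elasticsearch by its service name rather than by host IP. A minimal sketch under that assumption (the service name and image are made up; the environment override works because ASP.NET Core maps environment variables onto configuration keys such as ElasticsearchUrl):
services:
  myapi:
    image: myapi:latest                # hypothetical image for your API
    environment:
      # overrides "ElasticsearchUrl" from appsettings.json
      ElasticsearchUrl: http://elasticsearch:9200
    networks:
      - elastic-network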
(2) You might want to check that it really is working from IIS, as it seems unusual that NLog would create a "logstash-" index... normally Logstash would create that index and NLog should create its own, e.g. log4net creates index "log-", Essential.LoggerProvider.Elasticsearch uses "dotnet-", etc.
Disclaimer: I am the author of Essential.LoggerProvider.Elasticsearch

New User Register option is not appearing in the UI

I have installed apiman as described at
http://www.apiman.io/latest/download.html
I performed the following instructions:
mkdir ~/apiman-1.2.5.Final
cd ~/apiman-1.2.5.Final
curl http://download.jboss.org/wildfly/10.0.0.Final/wildfly-10.0.0.Final.zip -o wildfly-10.0.0.Final.zip
curl http://downloads.jboss.org/apiman/1.2.5.Final/apiman-distro-wildfly10-1.2.5.Final-overlay.zip -o apiman-distro-wildfly10-1.2.5.Final-overlay.zip
unzip wildfly-10.0.0.Final.zip
unzip -o apiman-distro-wildfly10-1.2.5.Final-overlay.zip -d wildfly-10.0.0.Final
cd wildfly-10.0.0.Final
./bin/standalone.sh -c standalone-apiman.xml
After this I can log in as the predefined admin and create organizations, APIs, and the rest, but the New User Registration option does not appear on the login page.
How can I get the new user register option? I am using Apache Tomcat.
The "Register? New User" option is not shown.
Rationale
In our WildFly distributions we use Keycloak for identity management and auth; it's all rolled into a single server including all of apiman's components and Keycloak. However, Keycloak can't run on Tomcat, so by default our Tomcat quickstart just uses tomcat's inbuilt auth mechanisms (which you can configure to use LDAP, JDBC, etc).
So, if you want Keycloak plus apiman, you need to do a little bit of extra work. However, this brings a lot of capabilities, so it's likely worth it for real deployments.
Bear in mind that this is slightly verbose to describe, but actually rather quick to implement.
Naturally, just using the WildFly all-in-one might be less hassle, especially for a quick test :-).
I'll add this to the apiman documentation shortly.
Using Keycloak IDM with apiman on Tomcat
Get Keycloak running
Download Keycloak, and run. Create your administrative user and log in.
Import the apiman Keycloak realm. This is just a demo walkthrough, you'll want to regenerate the keys and secrets for production :-).
For the clients apiman and apimanui, modify your Valid Redirect URIs to be the absolute URLs to your apiman instance(s) (e.g. http://myapiman.url:8080/apimanui/*).
Prepare Tomcat
The generic instructions are available in the Keycloak documentation, but I'll endeavour to provide more specialised config information.
Download and extract keycloak-tomcat8-adapter-dist into the global lib directory of Tomcat.
Modify apiman
Extract apiman.war, apimanui.war, and apiman-gateway-api.war and add the following:
META-INF/context.xml
In apiman.war:
<Context path="/apiman">
    <Valve className="org.keycloak.adapters.tomcat.KeycloakAuthenticatorValve"/>
</Context>
In apimanui.war:
<Context path="/apimanui">
    <Valve className="org.keycloak.adapters.tomcat.KeycloakAuthenticatorValve"/>
</Context>
In apiman-gateway-api.war:
<Context path="/apiman-gateway-api">
    <Valve className="org.keycloak.adapters.tomcat.KeycloakAuthenticatorValve"/>
</Context>
WEB-INF/keycloak.json
In apiman.war:
{
  "realm": "apiman",
  "resource": "apiman",
  "realm-public-key": "<YOUR REALM'S PUBLIC KEY>",
  "auth-server-url": "http://localhost:9080/auth",
  "ssl-required": "none",
  "use-resource-role-mappings": false,
  "enable-cors": true,
  "cors-max-age": 1000,
  "cors-allowed-methods": "POST, PUT, DELETE, GET",
  "bearer-only": false,
  "enable-basic-auth": true,
  "expose-token": true,
  "credentials": {
    "secret": "<APIMAN SECRET HERE, IF ANY>"
  },
  "connection-pool-size": 20,
  "principal-attribute": "preferred_username"
}
In apimanui.war, config as above, but with:
{
  "realm": "apiman",
  "resource": "apimanui",
  "realm-public-key": "<YOUR REALM'S PUBLIC KEY>",
  ...
  "credentials": {
    "secret": "<APIMANUI SECRET HERE, IF ANY>"
  },
  "principal-attribute": "preferred_username"
}
In apiman-gateway-api.war, config as above, but with:
{
  "realm": "apiman",
  "resource": "apiman-gateway-api",
  "realm-public-key": "<YOUR REALM'S PUBLIC KEY>",
  ...
  "credentials": {
    "secret": "<APIMAN-GATEWAY-API SECRET HERE, IF ANY>"
  },
  "principal-attribute": "preferred_username"
}
WEB-INF/web.xml
For all of the above, replace the login-config section with:
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>apiman</realm-name>
</login-config>
Other issues
You may want to copy over themes (or make your own). It's rather easy, but out of the scope of this response.
If you are using Apache Tomcat, the Keycloak web application isn't deployed with the apiman Tomcat overlay. Instead, the users and passwords are defined in the tomcat/conf/tomcat-users.xml file; there you can include new users, but as far as I know you can't create new users via the apimanui.

IBM Worklight 6.1 - Runtime: Http request failed: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated

I'm using IBM Worklight 6.1 and Backbone.js for my mobile app project. I got this error message when I try to invoke the adapter.
Orders.xml
<?xml version="1.0" encoding="UTF-8"?>
<wl:adapter name="Orders"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:wl="http://www.worklight.com/integration"
    xmlns:http="http://www.worklight.com/integration/http">
    <displayName>Orders</displayName>
    <description>Orders</description>
    <connectivity>
        <connectionPolicy xsi:type="http:HTTPConnectionPolicyType">
            <protocol>https</protocol>
            <domain>izify.com</domain>
            <port>443</port>
        </connectionPolicy>
        <loadConstraints maxConcurrentConnectionsPerNode="2" />
    </connectivity>
    <procedure name="getOrders"> </procedure>
</wl:adapter>
Orders-impl.js
function getOrders() {
    var input = {
        method : 'get',
        returnedContentType : 'json',
        path : "api/izify-api/admin/get_all_orders.php",
        parameters : { merchantId : "74718912a2c0d82feb2c14604efecb6d" }
    };
    return WL.Server.invokeHttp(input);
}
ERROR message
{
  "errors": [
    "Runtime: Http request failed: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated"
  ],
  "info": [],
  "isSuccessful": false,
  "warnings": []
}
Thanks a lot in advance.
I found the answer to my problem:
1. Clean the Worklight development server.
2. Redeploy the Worklight adapter.
After that, no more SSL issue. Done.
Sometimes this exception occurs when the JVM doesn't trust the certificate. It's one of several symptoms of a problem negotiating the SSL/https connection.
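If an untrusted certificate does turn out to be the cause, a common remedy is to import the server's certificate into the truststore of the JVM running the Worklight server - roughly like this (a sketch: the alias and certificate file are assumptions, the certificate can be exported with e.g. openssl s_client -connect izify.com:443, and "changeit" is the default truststore password):
keytool -import -trustcacerts -alias izify \
    -file izify-cert.pem \
    -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
    -storepass changeit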
Sometimes this happens when the remote server has an issue with its SSL certificate. However, from my end I am unable to reproduce it with an Oracle 1.7 JVM with unmodified trust stores - I can retrieve https://izify.com/api/izify-api/admin/get_all_orders.php and get back a response.
I also verified with a third-party certificate checker that there are no problems with the izify.com SSL certificate (other than that it expires soon, but that won't be a problem for a few months). Please run such a check from your end and confirm that the IP address it resolves matches what you see.
Then, check that your WL server's HTTP requests to izify.com aren't going through some sort of proxy that is redirecting or otherwise interrupting the SSL connection (for example, Fiddler or a development proxy).
I solved this problem by ensuring Eclipse is pointing to Java 7 as opposed to Java 6.