How to fix a 502 server error in Splunk logs

I am getting the below error in Splunk logs. Can someone tell me how to debug this, or give me any clue as to what is causing the issue?
May 14 16:23:57 localhost AWSStackName=PUB-DEV1-UI-UIService-xxxx Env=dev1 ContainerId=xxxx [3910]: Xforward=159.250.76.150, 23.58.156.207, 204.2.243.129 Rlog=- Ruser=- [14/May/2022:16:23:57 +0000] "GET /consumer/healthinfo/keep_alive?q=23 HTTP/1.1" FinalStatus=502 ResponseByte=504 Referer="https://www.webmdhealth.com/hra/Questionnaire.aspx" UsrAgent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.4 Safari/605.1.15" ResponseTime=55778ms AkamaiBot="-"
ResponseTime = 55778ms
eventtype = nix-all-logs
host = ip-xxxxx
index = portal_aws
linecount = 1
source = /usr/docker/231072.231072/containers.log
sourcetype = portal:log
splunk_server = xxxxxxx
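FinalStatus=502 together with ResponseTime=55778ms suggests the upstream service took close to a minute to answer and a proxy or load balancer in front of it gave up (many are configured with a roughly 60-second idle timeout), so the cause is most likely the backend behind /consumer/healthinfo/keep_alive rather than Splunk, which is only where the log landed. To scope it from the Splunk side first, here is a sketch of a search using the index and field names visible in the event above (adjust to your actual field extractions; note that ResponseTime carries an "ms" suffix, so it has to be stripped before doing math on it):
index=portal_aws sourcetype="portal:log" FinalStatus=502
| eval rt_ms = tonumber(replace(ResponseTime, "ms$", ""))
| timechart span=5m count AS errors_502, avg(rt_ms) AS avg_response_ms
If the 502s cluster on one URL, container, or time window, that tells you where to dig next: the application logs for that endpoint, or the timeout settings on whatever sits in front of it.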

Related

Error 401 fetching Swagger UI files when using authentication via Kong

With auth active (a bearer auth mechanism configured in Kong), I see these errors in the Kong logs when I try to access the Swagger UI documentation route (/api-docs):
94.23.216.213 - - [30/Sep/2022:10:47:12 +0100] "GET /api-core/api-docs HTTP/1.1" 200 3044 "-" "insomnia/2022.5.1"
94.23.216.213 - - [30/Sep/2022:10:47:14 +0100] "GET /api-core/swagger-ui.css HTTP/1.1" 401 26 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Insomnia/2022.5.1 Chrome/102.0.5005.63 Electron/19.0.3 Safari/537.36"
94.23.216.213 - - [30/Sep/2022:10:47:14 +0100] "GET /api-core/swagger-ui-bundle.js HTTP/1.1" 401 26 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Insomnia/2022.5.1 Chrome/102.0.5005.63 Electron/19.0.3 Safari/537.36"
94.23.216.213 - - [30/Sep/2022:10:47:14 +0100] "GET /api-core/swagger-ui-init.js HTTP/1.1" 401 26 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Insomnia/2022.5.1 Chrome/102.0.5005.63 Electron/19.0.3 Safari/537.36"
94.23.216.213 - - [30/Sep/2022:10:47:14 +0100] "GET /api-core/swagger-ui-standalone-preset.js HTTP/1.1" 401 26 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Insomnia/2022.5.1 Chrome/102.0.5005.63 Electron/19.0.3 Safari/537.36"
and the documentation route returns a blank page.
With authentication disabled, everything works fine.
My files:
src/app.js
// modules used by this excerpt (js-yaml and swagger-ui-express assumed)
const fs = require('fs');
const yaml = require('js-yaml');
const swaggerUi = require('swagger-ui-express');

// load the OpenAPI spec and mount the Swagger UI routes
const swaggerFile = fs.readFileSync('../swagger/docs/rtc_core_api.yaml', 'utf8');
const swaggerDoc = yaml.load(swaggerFile);
const options = {
    explorer: true,
};

app.get('/swagger-ui-init.js', swaggerUi.serve);
app.get('/swagger-ui.css', swaggerUi.serve);
app.get('/swagger-ui-bundle.js', swaggerUi.serve);
app.get('/swagger-ui-standalone-preset.js', swaggerUi.serve);
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerDoc, options));
How can I solve the problem?
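The 401s in the log come from Kong, not from Express: the /api-docs page itself returns 200 (Insomnia sends the bearer token), but the follow-up asset requests (swagger-ui.css, swagger-ui-bundle.js, and so on) are plain browser subresource fetches that carry no Authorization header, so Kong rejects them before they ever reach the app. One common approach is to give those asset paths their own Kong route without the auth plugin. A declarative-config sketch (service, route, and plugin names here are hypothetical; adapt them to your actual kong.yml and auth plugin):
_format_version: "2.1"
services:
  - name: api-core
    url: http://api-core:3000
    routes:
      - name: api-core-protected
        paths:
          - /api-core
        plugins:
          - name: jwt            # whichever bearer-auth plugin you use
      - name: api-core-docs      # no auth plugin on this route
        paths:
          - /api-core/api-docs
          - /api-core/swagger-ui
Kong's router prefers the longest matching path prefix, so the docs page and its assets fall on the unauthenticated route while everything else under /api-core keeps bearer auth.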

System.Net.WebException: 'The remote server returned an error: (463).' [duplicate]

This question already has answers here: How solve HTTP request failed! HTTP/1.1 463? (2 answers). Closed 3 years ago.
I'm trying to make a tool that checks whether a user exists, but I get error 463.
The URL I'm using: https://www.habbo.nl/habbo-imaging/avatarimage?hb=image&user=123
Public Sub checkAccount()
    Dim request As System.Net.HttpWebRequest = System.Net.HttpWebRequest.Create("https://www.habbo.nl/habbo-imaging/avatarimage?hb=image&user=" + userToCheck)
    Dim response As System.Net.HttpWebResponse = request.GetResponse()
    Dim sReader As System.IO.StreamReader = New System.IO.StreamReader(response.GetResponseStream)
    Dim Habboresult As String = sReader.ReadToEnd()
    If Habboresult.Contains("HTTP Status 404 – Not Found") Then
        'add the user to the listbox of available names
        freeName()
    Else
        'add the user to the listbox of names that are already in use
        usedName()
    End If
End Sub
Image of the error
Even though I don't agree with your approach (to checking for available user names), in order to fix your code and get it running you have to set the .UserAgent property after creating the request object (as the code below shows):
Dim request As System.Net.HttpWebRequest = CType(System.Net.HttpWebRequest.Create("https://www.habbo.nl/habbo-imaging/avatarimage?hb=image&user=123"), Net.HttpWebRequest)
request.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246"
Staying in context (still based on your code): invoking all those classes just to download a string is wasteful (are you sure you need to treat the response as a string instead of bytes? If bytes, you could simply test imageBytes.Length > 0). Instead you can use three lines of code (for string data):
Dim client As Net.WebClient = New Net.WebClient()
client.Headers.Add("User-Agent" , "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246")
Dim reply As String = client.DownloadString("https://www.habbo.nl/habbo-imaging/avatarimage?hb=image&user=123")
Or (for byte data, to convert into an image or to test its length):
Dim client As Net.WebClient = New Net.WebClient()
client.Headers.Add("User-Agent" , "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246")
Dim imageBytes = client.DownloadData("https://www.habbo.nl/habbo-imaging/avatarimage?hb=image&user=123")
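If you are on .NET Framework 4.5 or later, the same check also fits in a few lines with HttpClient, the modern replacement for both classes above. A sketch (in real code, reuse one shared HttpClient instance instead of creating one per call):
' blocking sketch; in a UI app, Await GetStringAsync inside an Async method instead
Dim client As New System.Net.Http.HttpClient()
client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246")
Dim reply As String = client.GetStringAsync("https://www.habbo.nl/habbo-imaging/avatarimage?hb=image&user=123").Result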
Edit: check this link out and see if it helps: https://stackoverflow.com/a/49956632/12808204
I may be speaking from ignorance here, as networking isn't my forte, but the 452-499 error range isn't defined by an official RFC, so what 463 means is likely implementation-specific. Some cursory googling seems to support this: that range tends to be used for self-defined error codes (but don't take my word as law on this). 4xx errors do generally indicate client errors, though, i.e. there may be an issue with your request. Maybe check that the string argument to System.Net.HttpWebRequest.Create() is correct? Break it out into its own variable, and make sure userToCheck is actually defined when checkAccount() is called.
Without more info about the site or API you're interfacing with, I don't have more to give. Could you provide some more background info?

Scrapy: How to resolve a 520?

This website responds with:
DEBUG: Crawled (520) <GET https://ddlfr.pw/> (referer: None)
How can I resolve this? I'm posting my code below for context.
from scrapy import Spider, FormRequest


class LoginSpider(Spider):
    name = 'ddlfr.pw'
    start_urls = ['https://ddlfr.pw/index.php?do=search']
    numero = 0

    def parse(self, response):
        # submit the search form with a browser-like user-agent
        return FormRequest.from_response(
            response,
            headers={'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'},
            formdata={'dosearch': 'Rechercher', 'story': 'musso', 'do': 'search', 'subaction': 'search', 'search_start': str(self.numero), 'full_search': '0', 'result_form': '1'},
            callback=self.after_login,
            dont_filter=True
        )

    def after_login(self, response):
        for title in response.xpath('//div[@class="short nl nl2"]'):
            yield {'roman': title.extract()}
Yes, because the site requires valid browser headers, while Scrapy's defaults identify it as a bot. Try using these headers:
headers = {
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
}
With them you should see a normal crawled status for the site.
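If the user-agent is all the site checks, it can also be set once for the whole spider instead of on every request; a sketch using Scrapy's built-in USER_AGENT setting:
from scrapy import Spider


class LoginSpider(Spider):
    name = 'ddlfr.pw'
    start_urls = ['https://ddlfr.pw/index.php?do=search']

    # picked up by Scrapy's UserAgentMiddleware for every request,
    # so individual headers= arguments become unnecessary
    custom_settings = {
        'USER_AGENT': ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                       '(KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'),
    }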
I suggest that you monitor what your web browser does when you send the form from the web browser (Network tab of the developer tools), and try to reproduce the request with Scrapy.
In Firefox, for example, you can copy the successful request from the Network tab as a curl command, which is a clear representation of the request.

Error 403 Forbidden not User-Agent

I've tried looking at previous posts on the same subject, but none of the solutions seem to work, and I'd like to confirm whether there is indeed nothing I can do to get around this.
I'm a journalist trying to download permit data from the planning authority's website. I could do this without a problem up until a few months ago, but the website has changed, and after adapting my code to the new site I now get an Error 403 every time I try to follow links on it.
Any help would be greatly appreciated.
My code (not the best looking or most efficient, but I'm self-taught and use coding mainly for scraping data for work) collects stats from the page: http://www.pa.org.mt/padecisionSearch?date=1/31/2018%2012:00:00%20AM
In the bit of code pasted below, I am trying to access each permit link (first one on the page: http://www.pa.org.mt/PACaseDetails?Systemkey=200414&CaseType=PA/10351/17%27) in order to scrape the permit details.
While I can generate the link addresses without a problem (they are accessible by clicking the link), sending a request to the address returns:
b'\r\nForbidden\r\n\r\nForbidden URL\r\nHTTP Error 403. The request URL is forbidden.\r\n\r\n'
I've tried changing the User-Agent, and I've also tried putting a timer between requests, but nothing seems to have any effect.
Any suggestions would be very welcome.
My code:
import requests
import pandas as pd
import csv
from bs4 import BeautifulSoup
from datetime import date, timedelta as td
import pandas as pd
from collections import Counter
import numpy as np
import matplotlib.pyplot as plt
import urllib

# result lists (initialisation added so the excerpt runs on its own)
list2 = []
case_status, applicant, architect = [], [], []
application_type, case_category = [], []
case_officer, case_officer2 = [], []
date_approved, application_link = [], []

with requests.Session() as s:
    #s.headers.update(head)
    r = s.get("http://www.pa.org.mt", data=None, headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"})
    page = s.get("http://www.pa.org.mt/padecisionSearch?date=1/31/2018%2012:00:00%20AM", data=None, headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"}).content
    soup = BeautifulSoup(page, 'html.parser')
    search_1 = soup.find_all('table')
    for item in search_1:
        item1 = item.find_all('tr')
        for item2 in item1:
            item3 = item2.find_all('td', class_='fieldData')
            for element in item3:
                list2.append(element.text)
                zejt_number = (len(list2) / 6)
                zi = element.find_all('a')
                if len(zi) == 0 and ((len(list2) - 1) % 5 == 0 or len(list2) == 1):
                    case_status.append("")
                    applicant.append("")
                    architect.append("")
                    application_type.append("")
                    case_category.append("")
                    case_officer.append("")
                    case_officer2.append("")
                    date_approved.append("")
                    application_link.append("")
                elif len(zi) != 0:
                    for li in zi:
                        hyperlink = "http://www.pa.org.mt/" + li.get('href')
                        application_link.append(hyperlink)
                        print(hyperlink)
                        z = s.get(hyperlink, data=None, headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"}).content
                        print(z)
First of all, your code is a bit messy. Is this all of your code, or just a part of it? For example, you are importing pandas twice. Nevertheless, the main reason this is not working is the hyperlinks you are generating:
for li in zi:
    hyperlink = "http://www.pa.org.mt/" + li.get('href')
    print(hyperlink)
The result looks like this:
http://www.pa.org.mt/../PACaseDetails?Systemkey=200414&CaseType=PA/10351/17'
This link won't work. A quick workaround is to edit the hyperlink before you make the request:
for li in zi:
    hyperlink = "http://www.pa.org.mt/" + li.get('href')
    hyperlink = hyperlink.replace('../', '')
    print(hyperlink)
    z = s.get(hyperlink, data=None, headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"}).content
    print(z)
The hyperlinks should now look like this:
http://www.pa.org.mt/PACaseDetails?Systemkey=200414&CaseType=PA/10351/17'
and the request should go through.
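A slightly more robust alternative to stripping the '../' by hand is to let the standard library resolve the relative href against the page it was scraped from; a sketch (zi is the link list from the question's code):
from urllib.parse import urljoin

# the page the links were scraped from
base_url = "http://www.pa.org.mt/padecisionSearch?date=1/31/2018%2012:00:00%20AM"

for li in zi:
    # urljoin collapses the leading "../" against base_url,
    # yielding http://www.pa.org.mt/PACaseDetails?... directly
    hyperlink = urljoin(base_url, li.get('href'))
    print(hyperlink)
This keeps working even if the site later changes how deep the search page sits in the URL hierarchy.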

In winston for Node.js, is there a way to suppress the log level in the message?

I'm using winston to stream log messages from Express. Based on various comments elsewhere, my setup is essentially:
var express = require("express"),
    winston = require("winston");

// enable web server logging; pipe those log messages through winston
var requestLogger = new (winston.Logger)({
        transports: [
            new (winston.transports.File)({
                filename: "logs/request.log",
                json: false,
                timestamp: false
            })
        ]
    }),
    winstonStream = {
        write: function(message, encoding) {
            requestLogger.info(message.replace(/(\r?\n)$/, ''));
        }
    };

this.use(express.logger({stream: winstonStream}));
But I'd like a way to suppress the output of the log level because I know for this particular logger it will always be "info". So rather than:
info: 127.0.0.1 - - [Fri, 20 Sep 2013 13:48:02 GMT] "POST /v1/submission HTTP/1.1" 200 261 "http://localhost:8887/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.65 Safari/537.36"
I would get:
127.0.0.1 - - [Fri, 20 Sep 2013 13:48:02 GMT] "POST /v1/submission HTTP/1.1" 200 261 "http://localhost:8887/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.65 Safari/537.36"
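One approach, assuming a winston version whose file transport supports a per-transport formatter when json is false (winston 2.x does; this is a sketch, not verified against the exact release used above):
var requestLogger = new (winston.Logger)({
    transports: [
        new (winston.transports.File)({
            filename: "logs/request.log",
            json: false,
            timestamp: false,
            // return only the raw message, dropping the "info: " prefix
            formatter: function(options) {
                return options.message;
            }
        })
    ]
});
Since this logger only ever receives info-level messages from the Express stream, nothing of value is lost by omitting the level.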