(Code is locally tested on a mac)
I've created a simple test for Redis, but unfortunately it fails with [1] 27996 segmentation fault:
// Alias config in config/app.php
// Had to rename the Redis alias because the phpredis extension already defines a Redis class!
'RedisManager' => Illuminate\Support\Facades\Redis::class,

// Test unit
use Illuminate\Support\Facades\Redis;

/** @test */
public function it_can_set_a_dummy_value_in_redis()
{
    $value = 'Hello';
    Redis::set($this->cacheKey, $value, 'EX', 60);
    $this->assertEquals($value, Redis::get($this->cacheKey));
}
It fails when the set command is fired.
I went through the code up to the point: Illuminate\Redis\Connections\PhpRedisConnection.php#command
I checked what the object-value of $this->client is:
Redis {#2278
isConnected: true
host: "127.0.0.1"
port: 6379
auth: null
mode: ATOMIC
dbNum: 0
timeout: 0.0
lastError: null
persistentId: null
options: {
TCP_KEEPALIVE: 0
READ_TIMEOUT: 0.0
COMPRESSION: NONE
SERIALIZER: NONE
PREFIX: "local_database_"
SCAN: NORETRY
}
}
I check the connection with:
dd($this->client->ping()); // it's true
However, it fails at the point:
return parent::command($method, $parameters); //set is called
Even though the call is wrapped in a try/catch block, no exception ever shows up there...
The logs show nothing:
nginx: 0 errors
php-fpm: 0 errors
redis:
2867:M 29 Aug 2020 22:46:03.142 * Background saving started by pid 27498
27498:C 29 Aug 2020 22:46:03.145 * DB saved on disk
2867:M 29 Aug 2020 22:46:03.306 * Background saving terminated with success
So Redis itself is working correctly.
I don't know what is going on. When I change to predis, everything works fine!
The only hint I have found is this thread. But since my ping() gets its pong back, it should work, shouldn't it?
Any ideas how I can fix this problem?
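Since predis reportedly works, a minimal workaround sketch (assuming the standard Laravel config layout; this sidesteps rather than fixes the phpredis segfault) is to switch the Redis client in config/database.php:

// config/database.php -- hedged workaround: use the pure-PHP predis client
// instead of the phpredis extension, which segfaults here.
'redis' => [
    'client' => env('REDIS_CLIENT', 'predis'),

    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_DB', 0),
    ],
],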
Related
I'm trying to scrape some websites, but for some reason it works locally (localhost) with Express, but not when I've deployed it to Lambda. I've tried the following: serverless-http, aws-serverless-express, and the serverless-express plugin. I also tried switching between axios and superagent.
Routes work fine, and after hours of investigating I've narrowed the problem down to the fetch/axios part. When I don't add a timeout to axios/superagent/etc., the app just keeps running until it times out at 15 or 30 seconds (whichever is set) and I get a 50* error.
service: scrape

provider:
  name: aws
  runtime: nodejs10.x
  stage: dev
  region: us-east-2
  memorySize: 128
  timeout: 15

plugins:
  - serverless-plugin-typescript
  - serverless-express

functions:
  app:
    handler: src/server.handler
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
protected async fetchHtml(uri: string): Promise<CheerioStatic | null> {
  const htmlElement = await Axios.get(uri, { timeout: 5000 });
  if (htmlElement.status === 200) {
    const $ = Cheerio.load((htmlElement && htmlElement.data) || '');
    $('script').remove();
    return $;
  }
  return null;
}
As far as I know, the default timeout of axios is indefinite. Remember, API Gateway has a hard timeout limit of 29 seconds.
I had the same issue recently; sometimes the timeouts are due to cold starts. So I basically had to add retry logic for the API call in my frontend React application.
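A minimal sketch of such retry logic, assuming axios on the frontend (the helper name and the backoff values are illustrative, not from the original answer):

// Retry the call a few times to ride out Lambda cold starts.
import axios, { AxiosResponse } from 'axios';

async function fetchWithRetry(url: string, retries = 3, timeoutMs = 5000): Promise<AxiosResponse> {
  let lastError: unknown = new Error('no attempts were made');
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      // Time out well below the 29 s API Gateway limit so there is room to retry.
      return await axios.get(url, { timeout: timeoutMs });
    } catch (error) {
      lastError = error;
      // Simple linear backoff between attempts.
      await new Promise((resolve) => setTimeout(resolve, attempt * 1000));
    }
  }
  throw lastError;
}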
I'm using varnish-3.0.6-1 on one host and tomcat8 on another.
Tomcat is running fine but for some reason I can't get varnish backend to be healthy.
Here's my config:
probe healthcheck {
  .url = "/openam/";
  .timeout = 3 s;
  .interval = 10 s;
  .window = 3;
  .threshold = 2;
  .expected_response = 302;
}

backend am {
  .host = "<INTERNAL-IP>";
  .port = "8090";
  .probe = healthcheck;
}
sub vcl_recv {
  if (req.request == "PURGE") {
    if (!client.ip ~ purgers) {
      error 405 "You are not permitted to PURGE";
    }
    return(lookup);
  }
  else if (req.http.host == "bla.domain.com" || req.http.host == "<EXTERNAL-IP>") {
    set req.backend = am;
  }
  else if (req.url ~ "\.(ico|gif|jpe?g|png|bmp|swf|js)$") {
    unset req.http.cookie;
    set req.backend = lighttpds;
  }
  else {
    set req.backend = apaches;
  }
}
It always shows:
Backend_health - am Still sick 4--X-R- 0 2 3 0.001956 0.000000 HTTP/1.1 302
telnet to that host works fine. The only thing I can't figure out is that curl returns 302, and that's because the main page under 'openam' on Tomcat redirects to another page:
$ curl -I http://<INTERNAL-IP>:8090/openam/
HTTP/1.1 302
Location: http://<INTERNAL-IP>:8090/openam/config/options.htm
Transfer-Encoding: chunked
Date: Tue, 12 Sep 2017 15:00:24 GMT
Is there a way to fix this problem?
Any advice appreciated, thanks.
Based on the information provided, you're hitting this bug in Varnish 3: the probe chokes on a status line that carries no reason phrase, which is exactly what your backend returns (HTTP/1.1 302 with nothing after the code).
The reporter was already on 3.0.7, and that's the only version now available in packaged releases, so you would likely have to build from source to get a fix.
Considering that, and that Varnish 3 is quite old, I would rather recommend upgrading to the newer Varnish 4 or 5. (It's always easier to rewrite a few lines of VCL than to maintain something compiled from source, with the hassle of making sure it's always up to date.)
Another obvious solution would be adjusting your app to send an HTTP reason phrase along with the status code, or perhaps pointing the probe at the final redirect location, which may (or may not) already provide a reason phrase in its status line.
Check whether curl -IL http://<INTERNAL-IP>:8090/openam/config/options.htm provides a reason phrase in the output.
If it's something like HTTP/1.1 200 OK and not just HTTP/1.1 200, then simply update your health check to that URL (and naturally adjust the expected response code as well).
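If the redirect target does respond with a proper reason phrase, the updated probe would look something like this (a sketch assuming the URL from the curl output above returns 200):

probe healthcheck {
  .url = "/openam/config/options.htm";
  .timeout = 3 s;
  .interval = 10 s;
  .window = 3;
  .threshold = 2;
  .expected_response = 200;
}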
I followed this Plain MAC-Auth setup guide to configure FreeRADIUS (version 2.2.5) in order to carry out MAC authentication. However, MAC authentication fails with the following log message:
rad_recv: Access-Request packet from host 192.168.0.7 port 59966, id=9, length=79
NAS-IP-Address = 192.168.0.7
User-Name = "34:76:C5:57:0F:A3"
User-Password = "34:76:C5:57:0F:A3"
# Executing section authorize from file /etc/freeradius/sites-enabled/default
+group authorize {
++[preprocess] = ok
++policy rewrite.calling_station_id {
+++? if ((Calling-Station-Id) && "%{Calling-Station-Id}" =~ /^%{config:policy.mac-addr}$/i)
?? Evaluating (Calling-Station-Id) -> FALSE
? Skipping ("%{Calling-Station-Id}" =~ /^%{config:policy.mac-addr}$/i)
+++? if ((Calling-Station-Id) && "%{Calling-Station-Id}" =~ /^%{config:policy.mac-addr}$/i) -> FALSE
+++else else {
++++[noop] = noop
+++} # else else = noop
++} # policy rewrite.calling_station_id = noop
[authorized_macs] expand: %{Calling-Station-Id} ->
++[authorized_macs] = noop
++? if (!ok)
? Evaluating !(ok) -> TRUE
++? if (!ok) -> TRUE
++if (!ok) {
+++[reject] = reject
++} # if (!ok) = reject
+} # group authorize = reject
Using Post-Auth-Type REJECT
WARNING: Unknown value specified for Post-Auth-Type. Cannot perform requested action.
Delaying reject of request 0 for 1 seconds
Going to the next request
Waking up in 0.9 seconds.
Sending delayed reject for request 0
Sending Access-Reject of id 9 to 192.168.0.7 port 59966
Waking up in 4.9 seconds.
Cleaning up request 0 ID 9 with timestamp +30
Ready to process requests.
From the above log, the problem seems to be that the "Calling-Station-Id" value cannot be obtained. Is this a FreeRADIUS configuration problem? Does anyone know how to solve it?
In the accounting section of the RADIUS config, add:

update request {
  Called-Station-Id += &NAS-Port-Id
}

and in the post-auth section, add:

update reply {
  Called-Station-Id += &NAS-Port-Id
}
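A different workaround sketch, not from the answer above: the debug log shows this NAS sending the MAC address only in User-Name, so one option (assuming that holds for all your clients) is to copy it into Calling-Station-Id near the top of the authorize section, before the rewrite.calling_station_id policy runs:

authorize {
  # If the NAS did not send Calling-Station-Id, fall back to the MAC
  # that this NAS puts in User-Name (see the debug log above).
  if (!Calling-Station-Id) {
    update request {
      Calling-Station-Id := "%{User-Name}"
    }
  }

  # ... rest of the existing authorize section ...
}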
I'm trying to migrate my Parse server to my own server instance on DigitalOcean. After deploying my parse-server, I'm running into an issue I can't understand.
When you make a call to the Cloud Code, you can retrieve your user as request.user if you have revocable sessions enabled.
Everything is OK, but sometimes (at random times) I get this strange behaviour: my request.user doesn't appear in Cloud Code.
I thought it could be a bad session token so I got rid of it by doing:
if (!request.user) {
  response.error("INVALID_SESSION_TOKEN");
  return;
}
and force the user to log in again.
This didn't work; I was getting INVALID_SESSION_TOKEN every time I logged in, so I decided to debug. These are my steps:
1.- Log in my user, so a _Session object is created:
so the sessionToken is r:a425239d4184cd98b9b693bbdedfbc9c
2.- Call the cloud function (sniffed log):
POST /parse-debug/functions/getHomeAudios HTTP/1.1
X-Parse-OS-Version: 6.0.1
X-Parse-App-Build-Version: 17
X-Parse-Client-Key: **** (hidden)
X-Parse-Client-Version: a1.13.0
X-Parse-App-Display-Version: 1.15.17
X-Parse-Installation-Id: d7ea4fa0-b4dc-4eff-9b7d-ff53a1424dcb
User-Agent: Parse Android SDK 1.13.0 (com.pronuntiapp.debug.uat/17) API Level 23
X-Parse-Session-Token: r:a425239d4184cd98b9b693bbdedfbc9c
X-Parse-Application-Id: **** (hidden)
Content-Type: application/json
Content-Length: 346
Host: 46.101.89.192:1338
Connection: Keep-Alive
Accept-Encoding: gzip
3.- request.user is still not appearing in Cloud Code.
EDIT: Resetting the parse-server worked in this case, but not in some others.
Days ago I found the solution.
When you have successfully deployed your Parse server, you will get request.user at any endpoint of the cloud, but if you call a cloud function from within the cloud, you won't get request.user unless you pass the sessionToken:
Parse.Cloud.define("foo", function(request, response) {
if (!request.user) {
response.error("INVALID_SESSION_TOKEN");
return;
}
var countResponses = 0;
var responsesNeeded = 1;
Parse.Cloud.run('bar', request.params, {
sessionToken: request.user.getSessionToken(),
success: function(c) {
countResponses++;
result = c;
if (countResponses >= responsesNeeded) {
response.success(result);
}
},
error: function(error) {
response.error(error);
}
});
});
In this case, foo will have request.user and bar won't, unless you pass the sessionToken.
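For completeness, a minimal sketch of what bar might look like (assumed for illustration, not from the original answer); with the sessionToken forwarded as above, its request.user is populated:

Parse.Cloud.define("bar", function(request, response) {
  // request.user is set here only because 'foo' forwarded the session token.
  if (!request.user) {
    response.error("INVALID_SESSION_TOKEN");
    return;
  }
  response.success("got user " + request.user.id);
});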
I am setting up lsyncd for automatic syncing of local and remote folders. I have researched the many solutions available and also tried adding extra params to the conf file.
I have also updated sshd_config with PermitRootLogin without-password.
I am able to ssh with a password, and rsync also works without a password when tried manually, but the problem is that when I use it via lsyncd it gives a permission denied error 3 times and exits (it seems like it's asking for a password).
lsyncd.conf.lua file
settings {
  logfile = "/var/log/lsyncd/lsyncd.log",
  statusFile = "/var/log/lsyncd/lsyncd.status",
  statusInterval = 10
}
sync {
  default.rsync,
  source = "/home/gaurav/Desktop/source/",
  target = "root@xxx.xxx.xx.xxx:/root/destination/",
  rsync = {
    compress = true,
    acls = true,
    verbose = true,
    _extra = {"-P", "-e", "/usr/bin/ssh -p 22 -i /home/gaurav/.ssh/id_rsa -o StrictHostKeyChecking=no"}
  }
}
I also tried with this one:
settings = {
  logfile = "/var/log/lsyncd/lsyncd.log",
  statusFile = "/var/log/lsyncd/lsyncd.status"
}
sync {
  default.rsyncssh,
  source = "/home/gaurav/Desktop/source/",
  host = "xxx.xxx.xx.xxx",
  targetdir = "/root/destination/"
}
Logs
Sun Dec 7 17:18:09 2014 Normal: recursive startup rsync: /home/gaurav/Desktop/source/ -> root@xxx.xxx.xx.xxx:/root/destination/
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.1]
Sun Dec 7 17:18:12 2014 Error: Temporary or permanent failure on startup of "/home/gaurav/Desktop/source/". Terminating since "insist" is not set.
If you are using Ubuntu 12.04, you must use rsyncOps instead of the rsync = {} block.
Try this:
sync {
  default.rsync,
  source = "/var/www/",
  target = server .. ":/var/www/",
  excludeFrom = "/etc/lsyncd/lsyncd-excludes.txt",
  rsyncOps = {"-e", "/usr/bin/ssh -o StrictHostKeyChecking=no", "-avz"}
}
https://www.stephenrlang.com/2015/12/how-to-install-and-configure-lsyncd/
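One more thing worth checking (an assumption on my part, not from the answer above): lsyncd usually runs as root, so the SSH key named in the config has to work for root non-interactively. You can test the exact transport command from the _extra line as root:

# Run the ssh command lsyncd's rsync would use; 'true' just tests the login.
sudo /usr/bin/ssh -p 22 -i /home/gaurav/.ssh/id_rsa -o StrictHostKeyChecking=no root@xxx.xxx.xx.xxx true
# A password prompt or "Permission denied (publickey,password)" here means the
# key is not authorized for root on the remote host (or not readable by root).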