This project is generally served with apache, but I want to introduce nginx as a front controller that proxies requests to memcached and falls back to apache if the URI is not found as a key in memcached.
When I make the request through nginx, I get 404s on every asset. I can paste a single asset URL from a request straight into the URL bar and retrieve its content, but with a 404 status. The 404s cause most of the page not to render, even though the assets themselves appear to be downloaded.
I can make the same request straight through apache and it works perfectly.
Here is my nginx config:
upstream memcached-upstream {
    server 127.0.0.1:11211;
}

upstream apache-upstream {
    server 127.0.0.1:5678;
}

server {
    listen 4567;
    root /vagrant;
    server_name sc;
    index index.php;

    access_log /var/log/nginx/www.sc.com.access.log;
    error_log /var/log/nginx/www.sc.com.error.log error;

    location / {
        # Only use this method for GET requests.
        if ($request_method != GET) {
            proxy_pass http://apache-upstream;
            break;
        }

        # Attempt to fetch from memcache. Instead of 404ing, use the @fallback internal location.
        set $memcached_key $request_uri;
        memcached_pass memcached-upstream; # Use an upstream { } block for memcached resiliency
        default_type application/json;     # Our services only speak JSON
        error_page 404 = @fallback;
    }

    location @fallback {
        proxy_pass http://apache-upstream;
    }
}
Here is a sample from my nginx access log:
10.0.2.2 - - [18/Dec/2013:23:50:08 +0000] "GET /templates/assets/js/csrf.js HTTP/1.1" 404 545 "http://localhost:4567/templates/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36"
And the same request from the apache log:
www.sc.com:80 127.0.0.1 - - [18/Dec/2013:23:50:08 +0000] "GET /templates/assets/js/csrf.js HTTP/1.0" 200 857 "http://localhost:4567/templates/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36"
Any help would be much appreciated.
Try replacing the error_page directive with this:
error_page 404 =200 @fallback;
I configured nuxt-mail to send emails from our Nuxt app.
The baseURL of my app is set to "https://localhost:3000/app" instead of "https://localhost:3000".
So nginx routes all calls to '/' to a static app, and all calls to '/app' to a dynamic app (roughly as sketched below).
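The actual nginx config is not included in the post; the static/dynamic split presumably looks roughly like the following sketch, where the server name, document root and the Node port are assumptions (TLS directives omitted):

server {
    listen 80;
    server_name www.example.com;          # assumed; the real domain is redacted in the logs below

    # Static app served at the site root (assumed docroot).
    location / {
        root /var/www/static-app;
        try_files $uri $uri/ /index.html;
    }

    # Dynamic Nuxt app under /app, proxied to the Node server on port 3000 (assumed).
    location /app/ {
        proxy_pass http://127.0.0.1:3000;  # keeps the /app/ prefix, matching router.base
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}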
The issue is that in production, nuxt-mail is unable to send email through a POST to '/app/mail/send'.
I tried setting the axios baseURL in nuxt.config.js as suggested on the nuxt-mail npm/GitHub page.
I don't see a path for send or mail in .nuxt/router.js.
file: contact.vue
Note: WEBSITE_DOMAIN points to https://localhost:3000 locally and to a valid web domain in production, in this format: https://www.production_website.com
<script>
...
methods: {
    ...
    sendMail() {
        this.$axios.post(
            this.$config.WEBSITE_DOMAIN + '/app/mail/send',
            {
                ...
            }
        ...
    }
    ...
</script>
file: nuxt.config.js
...
export default {
    ...
    router: {
        base: '/app/'
    },
    ...
}
Note: I did enable logging for the nginx upstream to the app server; the relevant entries are below.
Access log from nginx on production:
49.205.150.249 - - [04/May/2022:15:30:54 +0000] "POST /app/mail/send HTTP/1.1" 504 167 "https://www.<xxxxxxxxx_NAME>.com/app/contact" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:99.0) Gecko/20100101 Firefox/99.0"
Error log from nginx on production:
2022/05/04 15:30:54 [error] 2106#2106: *38 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 49.205.150.249, server: <xxxxxxxxx_NAME>.com, request: "POST /app/mail/send HTTP/1.1", upstream: "https://<xxxxxxxxx_IP>:3000/app/mail/send", host: "www.<xxxxxxxxx_NAME>.com", referrer: "https://www.<xxxxxxxxx_NAME>.com/app/contact"
What am I missing here? It works perfectly in staging, though.
The port allowing SMTP on the production instance was not open. On AWS EC2, I needed to enable outbound rules on the corresponding security group.
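A side note, not part of the original answer: the 504 in the nginx log is the proxy read timeout expiring while the Nuxt server sits blocked on the unreachable SMTP host, so nginx gives up before the upstream ever answers. If mail sending is legitimately slow, the timeout on the assumed /app proxy block could be raised while debugging, though the real fix here was the security group rule:

location /app/ {
    proxy_pass http://127.0.0.1:3000;   # assumed Nuxt upstream
    # Give slow upstream responses (e.g. outbound mail) more time before nginx returns 504.
    proxy_connect_timeout 10s;
    proxy_read_timeout 120s;
}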
We have a very simple web service.
Internet clients make HTTP/HTTPS requests to our service.
A UID is returned in the HEAD.
We want to load test our service with Yandex Tank.
load.yml
phantom:
  address: 211.81.41.11:443      # test bench IP, port 443
  # address: her.your.ru:443     # test bench IP, port 443
  ssl: true                      # use https
  uris:
    - "/"
  load_profile:
    load_type: rps
    schedule: const(5000, 320s)  # hold 5000 rps for 320 seconds
  instances: 5000
  header_http: "1.1"
  headers:
    - "[Host: her.your.ru]"
    - "[Connection: keep-alive]"
    - "[User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36]"
uploader:                        # module for building graphs
  enabled: true
  operator: my-username
  package: yandextank.plugins.DataUploader
  token_file: token.txt
console:
  enabled: true
telegraf:
  enabled: false
Start:
docker run --rm -v /opt/docker/yandex.tank/her.your.ru:/var/loadtest -it direvius/yandex-tank
The test completes, but when I check the results I see this problem: the number of threads is not constant and the RPS is not constant either.
Why does this problem occur? What should I check on the host where I run the test (4 CPU, 16 GB RAM; CPU utilization stays below 20% during the test), and how can I solve this problem?
When I use OpenResty to monitor client IPs through a Lua script, why is access_by_lua_file called twice when the root directory is accessed?
Here's how I use it:
http {
    access_by_lua_file lua/test.lua;
    server {
        location / {
            default_type text/html;
        }
    }
}
https://nginx.org/en/docs/http/ngx_http_index_module.html
It should be noted that using an index file causes an internal redirect
That is, a request to the root (/) is internally redirected to /index.html.
Here is a demo:
http {
    access_log /dev/stdout;
    access_by_lua_block {
        ngx.log(ngx.INFO, ngx.var.uri, ' ', ngx.req.is_internal())
    }
    server {
        listen 8888;
        location / {
            default_type text/html;
        }
    }
}
curl localhost:8888/index.html:
2020/08/17 15:14:22 [info] 22411#22411: *5 [lua] access_by_lua(nginx.conf:15):2: /index.html false, client: 127.0.0.1, server: , request: "GET /index.html HTTP/1.1", host: "localhost:8888"
127.0.0.1 - - [17/Aug/2020:15:14:22 +0300] "GET /index.html HTTP/1.1" 200 14 "-" "curl/7.68.0"
curl localhost:8888/:
2020/08/17 15:15:31 [info] 22411#22411: *6 [lua] access_by_lua(nginx.conf:15):2: / false, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8888"
2020/08/17 15:15:31 [info] 22411#22411: *6 [lua] access_by_lua(nginx.conf:15):2: /index.html true, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8888"
127.0.0.1 - - [17/Aug/2020:15:15:31 +0300] "GET / HTTP/1.1" 200 14 "-" "curl/7.68.0"
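If the IP-monitoring code should only run once per client request, one option (a sketch, not part of the original answer) is to return early from the access handler whenever the request is an internal redirect, using the same ngx.req.is_internal() check as in the demo:

http {
    access_by_lua_block {
        -- Internal redirects (e.g. "/" -> "/index.html" from the index module)
        -- re-run the access phase; skip them so the check happens once per request.
        if ngx.req.is_internal() then
            return
        end
        -- Hypothetical placeholder for the actual IP-monitoring logic.
        ngx.log(ngx.INFO, "client ip: ", ngx.var.remote_addr)
    }
    server {
        listen 8888;
        location / {
            default_type text/html;
        }
    }
}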
I tried making a GET call with axios from my Vue.js codebase/environment to the Jenkins API and I'm unable to do so.
I've read every resource I could find but wasn't able to fix this particular problem. I even created a .htaccess file to see if it would help, but it wasn't useful. I ran out of options, so I came here for help.
Below is the axios code I used within my App.vue file.
axios.get(
    *URL to access Jenkins that is currently running on a tomcat server*,
    {
        headers: {
            "jenkins-crumb": "*Some numbers and letters*",
        },
        auth: {
            username: "*obvious username*",
            password: "*obvious password*"
        },
        withCredentials: true,
        crossdomain: true
    }
)
.then(response => (this.info = response))
.catch(error => console.log(error));
Console log output:
Access to XMLHttpRequest at 'url' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Network output:
General
Request URL: URL
Request Method: OPTIONS
Status Code: 403
Remote Address: localhost:8080
Referrer Policy: no-referrer-when-downgrade
Request Headers
Provisional headers are shown
Access-Control-Request-Headers: authorization,jenkins-crumb
Access-Control-Request-Method: GET
Origin: http://localhost:8080
Referer: http://localhost:8080/
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36
Please help!
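For context (this is not from the original post): the 403 on the OPTIONS preflight means the browser never receives the Access-Control-Allow-Origin header it requires, so the real GET is blocked. One common workaround is to put a reverse proxy in front of Jenkins that answers the preflight and adds the CORS headers. A minimal nginx sketch, where the Jenkins/Tomcat address, the proxy port and the allowed origin are all assumptions:

server {
    listen 9090;

    location / {
        # Assumed Jenkins/Tomcat backend; adjust host, port and context path.
        proxy_pass http://127.0.0.1:8081/;

        # Answer the CORS preflight here so Jenkins never rejects it.
        if ($request_method = OPTIONS) {
            add_header Access-Control-Allow-Origin "http://localhost:8080";
            add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
            add_header Access-Control-Allow-Headers "Authorization, Jenkins-Crumb";
            add_header Access-Control-Allow-Credentials "true";
            return 204;
        }

        # Add the same headers to real responses coming back from Jenkins.
        add_header Access-Control-Allow-Origin "http://localhost:8080";
        add_header Access-Control-Allow-Credentials "true";
    }
}

The Vue app would then point axios at the proxy (e.g. http://localhost:9090/...) instead of calling Jenkins directly.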
Here is my problem: let's say I have some standard Apache logs, like so:
IP1 IP2 - - [13/Jun/2016:14:45:05 +0200] "GET /page/requested.html HTTP/1.1" 200 4860 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0"
I can successfully parse these logs with my current Logstash configuration:
input {
    file {
        path => '/home/user/logsDir/*'
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
output {
    elasticsearch { }
    stdout { codec => rubydebug }
}
But I apply a machine learning algorithm to these logs and give each one a score, so the new log line looks like this:
IP1 IP2 - - [13/Jun/2016:14:45:05 +0200] "GET /page/requested.html HTTP/1.1" 200 4860 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0" 0.00950628507703
Note the 0.00950628507703 at the end of the line, which is the actual score.
Now I would like to parse this line so I can use the score for visualisation in Kibana (Logstash is integrated into the whole ELK stack), so it would be great if the score could be parsed as a float.
NB: I can place the score before or after the standard Apache log message and insert any kind of characters between the two (currently it is just a space).
Any idea on how to tackle this problem?
Thanks in advance!
Eventually I found out how to proceed: I add a little keyword before the score, the word pred.
So my lines now look like this:
IP1 IP2 - - [13/Jun/2016:14:45:05 +0200] "GET /page/requested.html HTTP/1.1" 200 4860 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0" pred:0.00950628507703
And I use this configuration for Logstash:
input {
    file {
        path => '/home/user/logsDir/*'
        start_position => "beginning"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG} pred:%{NUMBER:prediction_score}" }
    }
    # I convert the score into a float in order to visualise it in Kibana
    mutate {
        convert => { "prediction_score" => "float" }
    }
}
output {
    elasticsearch { }
    stdout { codec => rubydebug }
}
I hope this will help you if you are stuck with the same problem!
Cheers!