Installing drone on a sub-directory/path - drone.io

I am trying to install Drone under a particular path of a domain, but it still tries to find its resources in the root directory. It's behind a Caddy server.
Example:
I want to install it at
http://abc.xyz.com/drone/
so it should find its resources (static, api, feed) like
http://abc.xyz.com/drone/static/app.css
but instead it tries to find them at
http://abc.xyz.com/static/app.css
I could write URL rewrites, but that seems counterintuitive and could be prone to errors.
How do I correct this?
My current Caddyfile:
https://abc.xyz.com {
    proxy /drone/ http://127.0.0.1:8000 {
        transparent
        websocket
    }
}
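To illustrate what I mean by URL rewrites, it would be something like this (untested Caddy v1 sketch; the prefixes are just the ones from my example above):

https://abc.xyz.com {
    # hypothetical: push bare asset/API requests back under /drone before proxying
    rewrite {
        r  ^/(static|api|feed)/(.*)$
        to /drone/{1}/{2}
    }
    proxy /drone/ http://127.0.0.1:8000 {
        transparent
        websocket
    }
}

Every prefix the app uses would need to be covered by a rule like this, which is why it feels brittle.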

Related

Issue with Vue.js devServer.proxy

In an application I'm currently working on, I'm trying to proxy requests from a Vue frontend to an Express server. The Express server is running on localhost:5000. This is in my vue.config.js:
"devServer": {
"proxy": {
"^/api/": {
"target": "http://localhost:5000",
"ws": true,
"changeOrigin": true
}
}
},
What I can't wrap my head around is why, in most cases, sending requests to my server with just api/routename works just fine, but in 2 components it doesn't. In the components where it does work, a GET request looks like this, for example:
axios.get('api/base/verified')
.then...
Then in two other components, following the same principle of requesting api/route-name, I'm getting errors. In development mode the requests suddenly go out to http://localhost:8080/api..., and when I try to deploy, they go to port 5000, but I get this error message:
xhr.js:184 GET http://localhost:5000/api/content/course/5f54f3c60bb7a30017c1abf2 net::ERR_CONNECTION_REFUSED
Does anyone have any idea about what the deal is, and why the proxying is acting so differently, depending on environment and component?
You should always use the URL with a leading slash, e.g. /api/base/verified (see this question). So I guess that in your case those 2 components may be requesting from a different path.
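For example, a minimal before/after sketch using the route from the question above (assuming the two failing components currently use the relative form):

// resolved relative to the current page path, so it may never match the ^/api/ proxy rule
axios.get('api/base/verified')

// always resolved against the dev-server root, so the proxy rule applies
axios.get('/api/base/verified')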
The reason it's different for different environments is that the proxy only works in the development server; that's why it's called devServer.
I'm not sure why you got that error; it seems there must be another config that points to port 5000, such as a proxy in nginx, but ERR_CONNECTION_REFUSED usually means there is no server listening at that endpoint.

Cannot POST api/system/sessions of Graylog2 on CentOS 7

I have a working installation of Graylog 2.1 on Debian 8, but I had to install Graylog on CentOS 7 because my datacenter uses this distribution and I want the same environment to avoid problems when I need to request changes in production.
I followed the Graylog guideline for CentOS 7 available at http://docs.graylog.org/en/2.1/pages/installation/os/centos.html and installed Graylog 2.1.2. MongoDB, Elasticsearch, and Graylog are running and answer local requests via the terminal. However, the web interface is not available. The login page is presented, but when I try to connect using the admin user, I receive this answer:
Error - the server returned: 404 - cannot POST http://mydomain:9000/api/system/sessions (404)
Below are the lines that I changed in Graylog's server.conf (I replaced the real IP address here):
rest_listen_uri = http://4.8.15.16:9000/api/
rest_transport_uri = http://4.8.15.16:9000
web_listen_uri = http://4.8.15.16:9000/
I have searched for references about this failure and created a graylog-settings.json file based on a suggestion from Graylog's GitHub issues, with this content:
"custom_attributes": {
"graylog-server": {
"rest_transport_url": false
}
}
But even after restarting the server, the problem continues. The Graylog log only shows INFO records, so it seems to me that the requests are not reaching the server. I would like to know whether this is due to the network configuration or can be solved by adjusting Graylog.
Your rest_transport_uri looks odd compared with your rest_listen_uri (it is missing the /api/ path). Make sure that you actually need to set rest_transport_uri at all and that it is set to the correct value.
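For comparison, a sketch of what a consistent set of URIs might look like in Graylog 2.1 if you really need rest_transport_uri, assuming the REST API and web interface both stay on port 9000 (values are illustrative, not a confirmed fix):

rest_listen_uri = http://4.8.15.16:9000/api/
rest_transport_uri = http://4.8.15.16:9000/api/
web_listen_uri = http://4.8.15.16:9000/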
I don't know where you found the information about graylog-settings.json, but that file is only used by the official Omnibus package (i.e. the OVA and AMIs).

How can I replace the server in Web Component Tester

I have a project set up based on the Polymer Starter Kit, which includes Web Component Tester.
This project includes PHP server code which I would also like to test, by writing tests that run in the browser and exercise the PHP server code through Ajax calls.
This implies replacing the server that Web Component Tester uses, but ONLY when testing the server-side code. I hope to make a separate gulp task for this.
Unfortunately, I don't understand the relationship between WCT, Selenium and whatever server is currently run. I can see that the WCT command starts Selenium, but I can't work out what the web server is and how it is started. I suspect it is WCT itself, because there is configuration mapping directories to URLs, but other than that I haven't a clue, despite trying to read the code.
Can someone explain how I go about making it run its own server when testing the client, but rely on an already-set-up web server (nginx) when testing the server code? I can set nginx to run on localhost, or another domain, if that is a way to choose a different configuration.
EDIT: I have now found that runner/webserver.js starts an Express server, and that URLs get mapped so that the base directory for the test runner and the bower_components directory are both mapped to the /components URL.
What is currently confusing me is under what circumstances this gets run. It appears that loading plugins somehow does it, but my understanding from reading the code is tenuous.
The answer is that Web Component Tester itself documents this in a comment in the runner/config.js file.
In wct.conf.js, you can add a registerHooks key to the object that gets returned, containing a function that does the following:
registerHooks: function(wct) {
  wct.hook('prepare:webserver', function(app, done) {
    var proxy = require('express-http-proxy');
    // forward anything under /api to the server that runs the PHP scripts
    app.use('/api',
      proxy('pas.dev', {
        forwardPath: function(req, res) {
          return require('url').parse(req.url).path;
        }
      })
    );
    done();
  });
}
This registerHooks function allows you to provide a route (/api in my case) which is proxied to a server that can run the PHP scripts.
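For context, here is a skeleton of the whole config file; only the registerHooks part comes from the answer above, and the suites and plugins values are placeholder assumptions:

// wct.conf.js
module.exports = {
  suites: ['test/index.html'],        // placeholder test suite
  plugins: {
    local: { browsers: ['chrome'] }   // placeholder browser setup
  },
  registerHooks: function(wct) {
    wct.hook('prepare:webserver', function(app, done) {
      // mount the /api proxy here, exactly as in the snippet above
      done();
    });
  }
};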

Meteor SSL not working, stuck on spinner (loads nav bar and sidebar but nothing else)

I've been having an issue for days and I don't know how to fix it.
I am trying to set up my SSL certificate. For some reason the site works over http, but when I try to load it over https, it loads only the navbar and sidebar and then gets stuck on the spinner.
When I examine the network connections in Chrome, it keeps trying to load XHR and websocket connections.
In Safari I get this error in the console:
WebSocket connection to 'wss://mysite/sockjs/530/72iokiqa/websocket' failed: WebSocket is closed before the connection is established
I am trying to set the headers, in particular the x-forwarded-proto header, but I can't figure out how to do that.
I am using mup.
// Configure environment
"env": {
"ROOT_URL": "https://inslim.com"
},
"ssl": {
"pem": "./ssl.pem"
}
For some reason, when I try to add a PORT to the env variables, it won't allow me to do mup deploy. It breaks and the site goes down.
I am also confused about nginx. I installed it and set it up, but I don't think it's making any difference. Whether I run 'service nginx stop' or 'service nginx start', nothing changes.
Can someone help me? Any advice or anything would help. Or if you need any other info please let me know.
Here's a screenshot of my spinner of death
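For reference, this is the kind of nginx server block I had in mind; it's only a rough sketch (the upstream port and certificate paths are guesses) and I'm not sure it's the right approach at all:

# sketch only: SSL termination in nginx, forwarding to the Meteor app
server {
    listen 443 ssl;
    server_name inslim.com;

    ssl_certificate     /etc/nginx/ssl/inslim.pem;   # guessed path
    ssl_certificate_key /etc/nginx/ssl/inslim.key;   # guessed path

    location / {
        proxy_pass http://127.0.0.1:80;              # assumes the app listens on PORT 80
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;      # needed for the sockjs websocket
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;    # the header I can't figure out how to set
    }
}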
The ssl part of your configuration JSON looks fine, but the env part needs a little modification; it should look at least something like this:
"env": {
"PORT": 80, // Defaults to 80, but could be different if app is configured differently
"ROOT_URL": "http://inslim.com"
}
If you do not already have the force-ssl package added to your application, I would suggest adding it (don't worry, it is a core Meteor package). If you do not also have the spiderable package added, the ROOT_URL element in your JSON can stay prefixed with http; but if you do have spiderable added, you will need to change the ROOT_URL prefix to https. All of this is per the Meteor Up documentation, which can be found here. I can also confirm that this setup works, because I have a production application running with this exact setup without any issues.
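If those packages are not added yet, each is a one-line command run from the app directory (force-ssl and spiderable are the core packages mentioned above; redeploy with mup afterwards):

meteor add force-ssl
meteor add spiderable   # only if you actually need it; then switch ROOT_URL to https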

Grails generating proper links when deployed behind proxy

Consider the following setup for a deployed Grails application.
the Grails application is deployed on a Tomcat server (tomcat7)
in front of Tomcat an Apache webserver is deployed
Apache does SSL offloading, and acts as a proxy for Tomcat
So far this is a quite standard setup, which I have used successfully many times. My issue is now with the links generated by the Grails application, especially those for redirects (the standard controller redirects, which occur all the time, e.g. after successfully posting a form).
One configuration is different from all the other applications so far: in this Grails application no serverURL is configured. The application is a multi-tenant application, where each tenant is given its own subdomain. (So if the application in general is running under https://www.example.com, a tenant can use https://tenant.example.com.) Subdomains are set automagically, that is, without any configuration at the DNS or Apache level. Grails can do so perfectly, by leaving out the serverURL property in Config.groovy: it then resolves all URLs by inspecting the client host.
However, when generating redirect URLs, Grails is not aware that the website is running under https. All redirect URLs start with http... I guess this is no surprise, because nowhere in the application is it configured that we are using https: there is no serverURL config, and technically the application is running on Tomcat's standard http port, because of the SSL offloading and proxying by Apache.
So, bottom line: what can I do to make Grails generate proper redirects?
I have tried to extend DefaultLinkGenerator and override the makeServerURL() method, like this:
class MyLinkGenerator extends DefaultLinkGenerator {

    MyLinkGenerator(String serverBaseURL, String contextPath) {
        super(serverBaseURL, contextPath)
    }

    MyLinkGenerator(String serverBaseURL) {
        super(serverBaseURL)
    }

    def grailsApplication

    /**
     * @return serverURL adapted to deployed scheme
     */
    String makeServerURL() {
        // get configured protocol
        def scheme = grailsApplication.config.grails.twt.baseProtocol ?: 'https://'
        println "Application running under protocol $scheme"
        // get url from super
        String surl = super.makeServerURL()
        println "> super.makeServerURL(): $surl"
        if (surl) {
            // if super url not matching scheme, change scheme
            if (scheme == 'https://' && surl?.startsWith('http://')) {
                surl = scheme + surl?.substring(7)
                println "> re-written: $surl"
            }
        }
        return surl
    }
}
(Maybe not the most beautiful code, but I hope it explains what I'd like to do. And I left out the bit about configuring this class in resources.groovy.)
When running this code strange things happen:
In the log you can see the code being executed, and a changed URL (http → https) being produced, but...
The redirect sent to the browser is the unchanged URL (http).
And even worse: all the resources in the generated views are crippled: they now all start with // (so what should be a relative "/css/myapp.css" is now "//css/myapp.css").
Any help or insight would be appreciated!
Grails version is 2.1.1 by the way (running a bit behind on upgrades...).
It seems you're always talking https to the outside world, so your cleanest option is to solve the problem where it originates, at your Apache webserver. Add Header edit Location ^http://(.*)$ https://$1 to httpd.conf and you're done.
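Spelled out, that is a single mod_headers directive in httpd.conf (or the vhost that does the SSL offloading); this sketch assumes mod_headers is enabled:

# rewrite any http:// Location header coming back from Tomcat to https://
Header edit Location ^http://(.*)$ https://$1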
If you have limitations that force you to solve this in your application, you could rewrite the Location header field in a Grails after interceptor. Details of that solution are in this post.
Some years have passed since this question was written, but the problems remain the same, or at least similar ;-)
Just in case anyone hits the same or a similar issue (Grails redirect URLs being http instead of https): we had this with a Grails 3.3.9 application running on OpenShift. The application was running in HTTP mode (on port 8080) and the OpenShift load balancer was doing the SSL termination.
We solved it by putting
server:
    use-forward-headers: true
into our application.yml. After that, Grails works perfectly and all the redirects created are correct, with https://.
Hint: we do not use grails.serverURL in our configuration.