Cannot bind arango 2.8.5 to endpoint ssl://0.0.0.0:443 - ssl

I am using ArangoDB 2.8.5 on Ubuntu 14.04 (64-bit).
In the config file, endpoint = ssl://0.0.0.0:443
The server fails to start, with this error message in the log: "FATAL failed
to bind to endpoint 'ssl://0.0.0.0:443'. Please check whether another
instance is already running or review your endpoints configuration."
I ran netstat -lnpt. Only port 22 is in use, by ssh.
The server starts up and binds to port 8530 with SSL when using endpoint = ssl://0.0.0.0:8530, and the admin site is accessible at https://www.website.com:8530/.../
I want the admin UI to be accessible without the extra port 8530, i.e. https://www.website.com/. This was possible to set up in earlier versions. What am I doing wrong, or is this not possible anymore?
It is a small application, so I am trying to avoid running another web server in front to forward requests to the Arango apps. Thank you very much for any direction.
Regards,
Anjan

The problem occurs because ArangoDB drops its root privileges to the user specified in the configuration:
[server]
endpoint = ssl://0.0.0.0:443
uid = arangodb
Ports below 1024 are privileged, so once the process is running as the arangodb user it can no longer bind to port 443.
This may become possible again with ArangoDB 3.0; for now you have to choose one of the workarounds that allow a non-root process to bind low ports:
authbind
Using the iptables REDIRECT target to redirect a low port to a high port (the "nat" table is not yet implemented for ip6tables, the IPv6 version of iptables); a rough example is sketched at the end of this answer
SELinux or AppArmor
Using the capabilities system available as of Linux kernel 2.6.24 and its CAP_NET_BIND_SERVICE capability:
setcap 'cap_net_bind_service=+ep' /usr/sbin/arangod
Every time ArangoDB is executed thereafter, it will have the CAP_NET_BIND_SERVICE capability. setcap is in the Debian package libcap2-bin.
More details on the capabilities can be found at:
capabilities(7) man page. Read it thoroughly if you're going to use capabilities in a production environment; there are some tricky details about how capabilities are inherited across exec() calls that are covered there.
setcap man page
"Bind ports below 1024 without root on GNU/Linux"

Related

Using cloudflared to do SSH tunneling accessible from the internet without needing to run cloudflared on the other side

I have a Raspberry Pi machine behind NAT in my room, and I want to access it from the internet using a URL. I found this article:
https://developers.cloudflare.com/cloudflare-one/tutorials/ssh
However, it requires me to run the cloudflared program on the connecting client. I understand that this is for security purposes. Is it possible to make the connection without running the cloudflared program on the client machine?
A follow-up question: is it possible to SSH into an IPv6 machine using the same technique?
There are various options when it comes to connecting to a machine running on a private network:
Running cloudflared on the client (which you already found)
Installing the WARP client on the user side, then using cloudflared on the server side to expose the service securely, and finally routing the private network's traffic over the tunnel via WARP. This approach is described in a tutorial here.
Cloudflare also started supporting in-browser rendering of an SSH session. I have written a tutorial describing how to set it up here.
Approach (3) does away with the need to run a client, since it only relies on a browser.
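As a rough illustration of the server-side piece shared by options (2) and (3), a cloudflared tunnel config exposing local SSH could look something like this (the hostname and tunnel ID are placeholders; see the Cloudflare docs for the full setup):

# ~/.cloudflared/config.yml on the Raspberry Pi
tunnel: <TUNNEL-UUID>
credentials-file: /home/pi/.cloudflared/<TUNNEL-UUID>.json
ingress:
  - hostname: ssh.example.com
    service: ssh://localhost:22
  - service: http_status:404

# keep the tunnel running
cloudflared tunnel run <TUNNEL-UUID>

With approach (3), the idea is that ssh.example.com is then added as a browser-rendered SSH application in the Zero Trust dashboard, so visitors only need a browser.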

Tomcat clustering on a Windows machine

Hi, I am setting up a Tomcat cluster on a Windows machine with the Apache httpd server.
I am done with two steps, load balancing and session affinity, and now I am at the third step, session replication. In that step I need to add a multicast route first.
In the tutorial, the command for adding the route is given for a Linux environment:
sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
Here eth0 is the network device.
But I don't understand how to add this in Windows 10. I am still looking into it and trying things on my machine. Any help would be appreciated.
I have set up the Tomcat cluster on a Windows machine using Tomcat 8. In a Windows environment there is no need to run the above command; it works absolutely fine without it. Almost all blogs on the web use Linux and give the above command to set up the route, but on Windows it is not required. One question is still pending, though: why is it not required?
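For reference only, if a multicast route does turn out to be necessary, the Windows equivalent of the Linux command above would look roughly like this (run in an elevated command prompt; 192.168.1.3 is a placeholder for the local interface address):

route ADD 224.0.0.0 MASK 240.0.0.0 192.168.1.3

Since the cluster works without it on Windows, this is purely for completeness.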

Setting up an Ubuntu VM on Azure with Apache

In Azure, I created a virtual network and then associated an Ubuntu Server virtual machine, created with the Azure Resource Manager deployment method, with that network. I then updated the associated Network Security Group and added an inbound security rule for port 80 (Source: Any, Destination: Any, Service: TCP/80). After installing Apache on the VM, I tried to access the server from my browser, but I have run into a wall. I can SSH into the VM just fine, but web access is a no-go, and I cannot figure out why. Any help would be appreciated.
This sometimes happens to me too because I forgot to RESTART the VM; yes, just restart it. At least this works for me. Also, don't forget to add an outbound rule too.
It worked for me with this inbound rule.
Note that when a VM is created from the portal (in the ARM model), it gets automatically associated with a virtual network (vnet), a specific subnet within the vnet, and a network security group.
When creating the inbound security rule, make sure to:
identify the correct network security group associated to the VM
use a priority number lower than 65500
set the source port range as *
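As a sketch, the same rule created with the Azure CLI would look roughly like this (resource group and NSG names are placeholders):

# allow inbound HTTP on the NSG attached to the VM's subnet or NIC
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name allow-http \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80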
You also need to open port 80 on the VM itself to allow web access.
I don't think that creating your Network Security Group rule opens the desired port on the VM automatically.
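If a host firewall such as ufw happens to be enabled on the Ubuntu VM, opening port 80 there is a one-liner (only needed when ufw or iptables is actually active; many Azure Ubuntu images ship with it disabled):

sudo ufw allow 80/tcp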
By default in Azure Resource Manager (ARM), all ports are open; there is no need to make Network Security Groups (NSGs) to open ports, only to close them. Here is an example of an ARM template that deploys an Ubuntu VM with Apache:
https://github.com/Azure/azure-quickstart-templates/tree/master/apache2-on-ubuntu-vm
Alternatively, if you want an auto-scaling LAP stack using VM Scale Sets (in public preview), you can find the ARM template for that here:
https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-lapstack-autoscale
Hope this helps! :)

Remote Docker Host Authentication

Hi, I'm currently working on a side project. In this project I'll have a central server that will need to connect to several remote Docker daemons. My problem is with authentication.
Given that the project will be hosted on DigitalOcean, my first thought was to accept connections only on the private networking interface. The problem is that that interface is accessible to all other servers in the same datacenter.
My second thought was to allow only requests from the central server using the DOCKER_HOST config; the problem is that, if I understand correctly, once the private IP of the central server is known, the IP can be spoofed.
My third thought was to enable TLS ( https://docs.docker.com/articles/https/ ), but I've never dealt with these things before and the tutorial is unclear to me; it leans heavily on terminology I don't know.
So basically the problem is that I have a central client and multiple remote Docker hosts; what is the best way to connect to them? Thank you.
EDIT: I managed to solve the problem using HTTP authentication by running nginx as a proxy in front of the docker daemon.
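For anyone interested, a rough sketch of that setup, assuming dockerd is configured to listen only on 127.0.0.1:2375 and an htpasswd file already exists (names, paths and the certificate are illustrative):

# /etc/nginx/conf.d/docker.conf
server {
    listen 2376 ssl;
    server_name docker.example.com;

    ssl_certificate     /etc/nginx/ssl/docker.crt;
    ssl_certificate_key /etc/nginx/ssl/docker.key;

    location / {
        # HTTP basic authentication in front of the Docker Remote API
        auth_basic           "Docker API";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:2375;
    }
}

The central server then talks to https://docker.example.com:2376 with the basic-auth credentials instead of hitting the daemon directly.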
My understanding is that you are trying to build a Docker cluster which can manage all nodes from one single central server.
This sounds very much like Docker's Docker Swarm project. From their docs, this is roughly how it works:
open a TCP port on each node for communication with the swarm manager
install Docker on each node
create and manage TLS certificates to secure your swarm
Sorry, this should be posted as a comment, but I do not have enough rep to do that.
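For reference, the TLS option mentioned in the question (the third thought) boils down to starting the daemon and client with mutual-TLS flags along these lines; certificate generation is omitted here, see the linked Docker article for that part (host and file names are placeholders):

# on each remote Docker host (daemon side; older versions use 'docker daemon' instead of dockerd)
dockerd --tlsverify \
  --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
  -H tcp://0.0.0.0:2376

# on the central server (client side)
docker --tlsverify \
  --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
  -H tcp://remote-host:2376 info

Only clients presenting a certificate signed by your CA can talk to the daemon, which addresses the IP-spoofing concern.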

Hosting a site using XAMPP from a local network without port forwarding

I want to make my site available worldwide. I'm using XAMPP for hosting. I have no access to any kind of servers or modems. The situation is shown below:
My site server has a local IP assigned by the WiFi router, and it runs Windows 8.
Remember, I have no access to any kind of servers or modems, so port forwarding is impossible (out of my scope).
It's actually difficult, but not impossible.
One way I would approach this is:
I would host a page on the internet.
Then take the request and store it in a database.
One of my programs would always be running on my computer.
It would check for requests and curl the request to localhost. For this you may use Node.js (fetching the data from the database with a GET request and curling it to localhost).
This is the best I could think of. I am working on it; when the code is ready I'll make it open source and notify you :)
But still, it's difficult, as you need to put the user's request to sleep for 2 seconds and then transfer it.
It's slow, but it may work out for you.
Disadvantages:
The program will be very slow and memory usage will be higher.
It may break frequently.
High bandwidth wastage
If not encrypted, MITM (man-in-the-middle) attacks may be possible.
Advantages:
Indirect method of hosting
No need to worry about your code being lost.
I am looking for a better alternative, and I would like to put this question up for bounty once again.
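To make the idea above a bit more concrete, here is a very rough shell sketch of the local relay loop (the answer suggests Node.js, but the flow is the same; https://relay.example.com/pending is a hypothetical endpoint on the hosted page that returns a queued request path, or nothing):

while true; do
  # ask the hosted relay page for a queued request path (hypothetical endpoint)
  path=$(curl -s https://relay.example.com/pending)
  if [ -n "$path" ]; then
    # replay the request against the local XAMPP server
    curl -s "http://localhost${path}" > /dev/null
  fi
  sleep 2
done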
If you cannot open the necessary ports within your LAN, you will require access to an external server. However, the external server does not need to host any code, e.g.:
Create a Linux-based EC2 instance using Amazon's free tier.
Install a package to redirect remote to local ports:
a. using socat:
Install socat using your distribution's package manager
Connect via SSH: ssh -R 42500:127.0.0.1:80 -o ServerAliveInterval=60 ubuntu@xxx.xxx.xxx.xxx "socat TCP-LISTEN:8080,fork TCP:127.0.0.1:42500" (this forwards your local port 80 to port 42500 on the instance and runs socat there to expose it publicly on port 8080)
b. using a web server and reverse proxy:
Install Apache or nginx and any required reverse proxy modules, and configure your VirtualHost to proxy requests to a local port, e.g. :8080 -> 127.0.0.1:42500 (a sketch is shown after these steps)
Connect via SSH: ssh -N -R 42500:127.0.0.1:80 -o ServerAliveInterval=60 ubuntu@xxx.xxx.xxx.xxx
Your machine is now reachable via the EC2 instance at http://xxx.xxx.xxx.xxx:8080/.
I occasionally use this technique when debugging web service callbacks.
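A minimal nginx sketch for option (b), assuming the reverse SSH tunnel above is already delivering traffic to 127.0.0.1:42500 on the EC2 instance:

# /etc/nginx/conf.d/tunnel.conf
server {
    listen 8080;
    server_name _;

    location / {
        # hand requests over to the SSH-forwarded port
        proxy_pass http://127.0.0.1:42500;
        proxy_set_header Host $host;
    }
}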
Update 17-02-2014
If you are a Windows user, you will need to install a third-party tool to support SSH. Options include:
cygwin
git bash
PuTTY
PuTTY is the easiest choice if you are not familiar with *nix tools. To configure remote port forwarding in PuTTY, expand the following setting: Connection -> SSH -> Tunnels. Given the previously described scenario, populate Source port as 42500, Destination as 127.0.0.1:80 and tick the Remote option. (You may also need to add the path to a PuTTY-compatible private key in the Connection -> SSH -> Auth tab, depending on your server configuration.)
To test you have successfully forwarded a port, execute the command netstat -lnt on your server. You will see output similar to:
tcp 0 0 127.0.0.1:42500 0.0.0.0:* LISTEN
Finally you can test with curl http://127.0.0.1:42500. You will see the output of your own machine's web root running on port 80.
If you don't have a public IP address and cannot use port forwarding, it is impossible to host the site.
As people have said, you need a public IP address. However, even if you had one, you should not use XAMPP as a public server, as it is designed for development and therefore has some security settings disabled.
I would recommend buying some shared web hosting and uploading the site to that. (You can get cheap hosting if you google 'shared web hosting', plus free .tk domains are available: http://www.dot.tk/)
Does your company have a VPN?
If it does and you have access to the VPN, you can add your server to the VPN network; your guests will then only need to log in to your company's VPN and can access your site as if on a local network, without port forwarding. And since your data is very confidential, I assume that using a VPN will also help increase the security of your data.
Please correct me if I'm wrong.
Thank you.
What you are asking is not possible without port forwarding.
Let's break it into steps.
To host your site locally you will need an IP that is static, so that users can always reach it.
You will need a domain so that the IP can be mapped to a user-friendly name.
A 24x7 internet connection is a must! You added a WiFi router in your diagram, and most of today's routers are capable of port forwarding.
What I would do in your scenario is:
Instead of using XAMPP, I would install WAMP because I am more familiar with it and it is easy to configure (totally a personal preference).
Then I would set my server "ONLINE". (Google how to set a WAMP server online; a sketch of the relevant Apache setting is at the end of this answer.)
Forward port 80 from the router settings to my local computer's IP address. (It is usually labelled "Virtual Server", "Firewall", "Port Forwarding", etc.; the name varies from router to router.)
Suppose you have a local IP "192.168.1.3" and a global/router IP "254.232.123.232"; then you would redirect all HTTP requests made to the router to your local IP:
254.232.123.232:80 ---> 192.168.1.3
That is good for now, but then you will need to tackle the router's dynamic IP problem. But don't worry, thanks to some free services that will be easy!
Go to no-ip.org -> set up an account -> create an entry, just a subdomain for now to test whether everything is working fine (a subdomain like mysite.no-ip.org; later you can purchase a real domain).
Enter your router's IP address there and download their application, which will automatically update their servers if your IP changes.
Wait a few minutes and voilà! Your site is live.
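Regarding step 2, putting the WAMP server "online" essentially means letting Apache accept requests from outside localhost. With Apache 2.4 the relevant httpd.conf section looks roughly like this (the path depends on your installation; WAMP's tray menu can make this change for you):

<Directory "c:/wamp/www/">
    Options Indexes FollowSymLinks
    AllowOverride All
    # allow requests from any client, not just localhost
    Require all granted
</Directory>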