I need to run a Nextcloud instance on an IPv6-only server.
(Web access will be reachable over IPv4 and IPv6 via an nginx reverse proxy.)
Problem:
When I try to install an app in Nextcloud, it tries to fetch it with cURL from https://github.com, which (what a shame these days) is not IPv6 compatible.
Error message:
cURL error 7: (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://github.com/nextcloud-releases/calendar/releases/download/v4.2.0/calendar-v4.2.0.tar.gz
Setup:
Debian 11.6
Apache/2.4.54
PHP 8.1.13 (cli)
First approach:
I found public proxy services for GitHub that proxy the requests through an nginx instance; they are used by adding entries to the /etc/hosts file:
2a01:4f8:c010:d56::2 github.com
2a01:4f8:c010:d56::3 api.github.com
2a01:4f8:c010:d56::4 codeload.github.com
2a01:4f8:c010:d56::5 objects.githubusercontent.com
That solution works for me on the console, but not for the Nextcloud application.
It seems php-curl ignores the /etc/hosts file.
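A quick sanity check from the shell, assuming the entries above are in /etc/hosts (this only confirms the system resolver picks them up; the web SAPI may still behave differently):

getent hosts github.com        # should print the proxy's IPv6 address
curl -6 -I https://github.com/ # works from the console, as described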
I also found "CURLOPT_RESOLVE" but it seems that something i need to put in the nextcloud code?
Is there anything similar i can put in the php.ini or something like that?
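For reference, this is roughly what CURLOPT_RESOLVE would look like in plain PHP, reusing the proxy address for github.com from above; it is only a sketch of why this would normally mean touching code, which is exactly what I want to avoid (depending on the curl version, the IPv6 address in the resolve string may need brackets):

php -r '
  $ch = curl_init("https://github.com/");
  // pin github.com:443 to the proxy address instead of relying on DNS
  curl_setopt($ch, CURLOPT_RESOLVE, ["github.com:443:2a01:4f8:c010:d56::2"]);
  curl_setopt($ch, CURLOPT_NOBODY, true);
  curl_exec($ch);
  echo curl_errno($ch), " ", curl_error($ch), PHP_EOL;
'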
Question:
Is there any option to force php-curl to use the /etc/hosts file without touching the Nextcloud code?
Related
I am not sure how, or if, this can be done. I have a home network and would like to see a computer (not the server) from a remote location. I have Apache on my server. Example: the network computer I would like to see has IP 152.254.1.33. Is there a way to add this IP to the Apache root directory? I have tried adding a shortcut within the root directory, but it only works on the home network, not via a remote connection.
I need some clarification here on what you are trying to accomplish: are you trying to access the Apache website from outside the local network?
If that is the case: Apache is automatically set to listen on all network interfaces. You can check this in your virtual host configuration in the sites-enabled directory of your Apache installation.
You should see something like the following in 000-default.conf:
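(A typical stock default on Debian/Ubuntu looks roughly like this; the DocumentRoot path below is the distribution default, not something taken from your setup.)

<VirtualHost *:80>
    # "*:80" means Apache accepts connections on every interface, port 80
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
</VirtualHost>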
You can test whether Apache is serving pages correctly using the command:
curl 127.0.0.1
You should see the HTML of the page being served.
If this is the case, then it's likely the firewall on your machine/router or your ISP is blocking the required ports. You can allow Apache through the firewall on Ubuntu using sudo ufw allow "Apache Full".
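For example (assuming ufw is the active firewall and the "Apache Full" application profile is present, which the apache2 package normally provides):

sudo ufw allow "Apache Full"   # opens ports 80 and 443
sudo ufw status                # confirm the rule is active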
If you give me some more info in comments we can probably work this out.
I have a new server running CentOS, and it has httpd running on 192.168.1.100:80.
I can connect to my server through ssh on 192.168.1.100, but when I go to 192.168.1.100 in my browser, it says "Oops! Google Chrome could not connect to 192.168.1.100".
I also tried wget to see if that works, and here is where it gets interesting.
when I run:
wget 192.168.1.100
On my server it gets the index.html file as it should.
but when I run it on my laptop it says "Connecting to 192.168.1.100:80... failed: No route to host."
Does anyone know how to fix this?
It seems like your Apache configuration binds httpd only to 192.168.1.100:80.
Find the line Listen 192.168.1.100:80 in the main Apache configuration (something like /etc/httpd/conf/httpd.conf or /etc/apache2/httpd.conf) and change it to Listen 0.0.0.0:80.
Restart Apache and it will probably work.
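A quick way to confirm the change took effect (assuming net-tools is installed and the service command is available, as on CentOS):

service httpd restart
netstat -tlnp | grep ':80'
# Expect httpd bound to 0.0.0.0:80 (or :::80) rather than 192.168.1.100:80.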
I've been able to rather easily get Facebook's HHVM working from prebuilt Debian packages, as well as compile it, and afterwards to run it behind Apache as a proxy. The problem with the proxy setup, though, is that I can't get response headers other than HTTP status code 200 (like 304, for example) through. It's not the proxy config of Apache, but something in how HHVM and Apache interact, or even in HHVM itself.
Anyway, HHVM has officially stopped supporting the standalone server and is moving over to FastCGI, and as all of our servers run Debian, I don't have access to mod_proxy_fastcgi without compiling it (the only Apache 2.4 backports I found unfortunately don't have mod_proxy_fastcgi backported).
So I'm currently trying to get HHVM to run behind the old mod_fastcgi with Apache 2.2. But currently I'm only getting "connect() failed" in the Apache error log, while HHVM is listening on :::1080.
The important part of my Apache config is:
RemoveHandler application/x-httpd-php
FastCgiExternalServer /home/www/hhvm/hostname/htdocs/php5.fcgi -flush -host ip6-localhost:1080
AddType application/x-httpd-fastphp5 .php
Action application/x-httpd-fastphp5 /php5.fcgi
Alias /php5.fcgi /home/www/hhvm/hostname/htdocs/php5.fcgi
netstat also lists HHVM as listening on :::1080, and I can connect to it via telnet.
Any ideas on what I need to change so it works?
Looks like an IPv6 port problem. Try [ip6-localhost]:1080. Not sure if this has side effects in Apache.
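In the config above that would mean changing the -host argument roughly like this (just a sketch of the suggestion; whether mod_fastcgi accepts the bracketed IPv6 form may depend on the module version):

FastCgiExternalServer /home/www/hhvm/hostname/htdocs/php5.fcgi -flush -host [ip6-localhost]:1080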
I've installed Subversion and Apache on my PC. I can access my repository using the following URL:
http://localhost/svn/repos/
Now I want other members of my group to access the project files I've put in my repository. As it's my first time using SVN, I looked for solutions and I think I'm a bit lost.
I read about port forwarding in my router, so I opened my router interface, went to the NAT/PAT section of my router configuration, and added a new rule with the following characteristics:
Application: svn
External port: 3690
Internal port: 80
Protocol: TCP
Equipment: myPC
and checked the "Active" option. But I think I'm missing something.
I read in an article that to verify whether remote access is working, I have to go to:
svn://83.200.108.71
But it doesn't work: "unable to connect..."
Can someone please help me?
Wait... You can access your repository via http://? Why not let others access the repository using http://?
Don't do anything with your router. Don't muck with ports. Apache httpd is serving your repository just fine off of Port 80. Tell your users to simply access your repository via http://<machineName>/svn/repos. That's all there is to it.
svn:// is a completely different protocol than http://. Port 3690 just happens to be the default port of svn://, but that doesn't mean if you reroute your http:// protocol there, everything will work.
Most of the time, people who first use Subversion set up the svnserve server instead of Apache httpd because it's easier to configure than Apache httpd. Here's how you set up a repository to use svn://:
$ svnadmin create my_repos
$ vi my_repos/conf/svnserve.conf   # Need to uncomment the 'password-db = passwd' line
$ vi my_repos/conf/passwd          # Need to set up user accounts
$ svnserve -r my_repos -d
And that's it. Now your users can access the repository via svn://<machineName>.
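For reference, the relevant edits typically look something like this (the user name and password are placeholders, not anything from your setup):

# my_repos/conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd

# my_repos/conf/passwd
[users]
alice = s3cret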
Although svnserve is simpler and easier than Apache (and faster), there are many reasons to use Apache httpd over svnserve:
Port 80 is unlikely to be blocked by the network, while port 3690 may well be blocked.
You can let Apache httpd use LDAP for authentication (which can also allow Windows Active Directory authentication)
Apache httpd can serve multiple repositories, while svnserve can only serve a single repository on port 3690.
I have recently created a Rackspace cloud server instance running CentOS 5.5. I used yum to install the "Web Server" group (it includes Apache, etc.), added www.booztrakr.com as the ServerName in httpd.conf, and made sure iptables allows traffic on port 80. I registered this domain with GoDaddy and changed their name servers to the Rackspace name servers on their site, then added "A" and CNAME records to the Rackspace name servers. httpd has been started. When I use curl on the server I can get the Apache landing page. When I dig www.booztrakr.com from a remote machine (over the internet), the answer section returns:
www.booztrakr.com. 300 IN CNAME booztrakr.com.
booztrakr.com. 300 IN A 184.106.216.156
When I try a browser or curl, it can't connect:
curl -G www.booztrakr.com
curl: (7) couldn't connect to host
I know this has got to be pretty basic and config related, but I'll be damned if I can see it. Any help would be appreciated. Thanks.
If dig resolves, this just means the DNS server returns the right values. It would even work if the IP didn't exist.
If an HTTP connection to the server fails, this is a configuration problem.
The server responds to ICMP requests, so it's not a routing problem.
When I use curl on the server I can get the Apache landing page
Your web server is running, but you just can't reach it from outside. This is the problem. What does iptables --list output?
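For context, this is the kind of check (and, if needed, fix) that usually follows on a stock CentOS 5 firewall; the RH-Firewall-1-INPUT chain name is the CentOS 5 default and may not exist if the rules were customized:

iptables --list --numeric
# If a REJECT rule comes before any ACCEPT for tcp dpt:80, inbound HTTP is blocked.
iptables -I RH-Firewall-1-INPUT -p tcp -m tcp --dport 80 -j ACCEPT
service iptables save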