SVN: remote access isn't working - Apache

I've installed Subversion and Apache on my PC. I can access my repository using the following URL:
http://localhost/svn/repos/
Now I want other members of my group to access the project files I've put in my repository. As it's my first time using SVN, I looked for solutions and I think I'm a bit lost.
I read about port forwarding, so I opened my router's interface, went to the NAT/PAT section of my router configuration, and added a new rule with the following characteristics:
Application: svn
External port: 3690
Internal port: 80
Protocol: TCP
Equipment: myPC
I also checked the option "Active". But I think I'm missing something.
I read in an article that to verify that remote access is working I have to go to
svn://83.200.108.71
but it doesn't work: "unable to connect..."
Can someone please help me?

Wait... You can access your repository via http://? Why not let others access the repository using http://?
Don't do anything with your router. Don't muck with ports. Apache httpd is serving your repository just fine on port 80. Tell your users to simply access your repository via http://<machineName>/svn/repos. That's all there is to it.
svn:// is a completely different protocol than http://. Port 3690 just happens to be the default port of svn://, but that doesn't mean everything will work if you reroute your http:// traffic there.
Most of the time, people who first use Subversion set up the svnserve server instead of Apache httpd because it's easier. Here's how you set up a repository to use svn://:
$ svnadmin create my_repos                  # Create the repository
$ vi my_repos/conf/svnserve.conf            # Uncomment the 'password-db = passwd' line
$ vi my_repos/conf/passwd                   # Set up user accounts
$ svnserve -r my_repos -d                   # Run svnserve as a daemon
And that's it. Now your users can access the repository via svn://<machineName>.
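In case it helps, here is roughly what those two files end up containing (a sketch; the user name and password are placeholders):

# my_repos/conf/svnserve.conf
[general]
anon-access = none
auth-access = write
password-db = passwd

# my_repos/conf/passwd
[users]
alice = alicespassword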
Although svnserve is simpler, easier, and faster than Apache httpd, there are several reasons to prefer Apache httpd over svnserve:
Port 80 is unlikely to be blocked by the network, while port 3690 may be blocked
You can let Apache httpd use LDAP for authentication (which can also allow Windows Active Directory authentication)
Apache httpd can serve multiple repositories, while svnserve can only serve a single repository on port 3690 (see the sketch below)
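For comparison, here is a minimal sketch of the mod_dav_svn configuration Apache httpd uses to serve every repository under a parent directory over http:// (the paths and auth file location are assumptions):

# httpd.conf (requires mod_dav_svn to be loaded)
<Location /svn>
    DAV svn
    SVNParentPath /var/svn
    AuthType Basic
    AuthName "Subversion repositories"
    AuthUserFile /etc/svn-auth-users
    Require valid-user
</Location>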

Related

Force php-curl to resolve with /etc/hosts configuration

I need to run a Nextcloud instance on an IPv6-only server.
(Web access will be accessible over IPv4 and IPv6 via an nginx reverse proxy.)
Problem:
If I try to install an app in Nextcloud, it tries to curl it from https://github.com, which is (what a shame, these days) not IPv6 compatible.
Error message:
cURL error 7: (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://github.com/nextcloud-releases/calendar/releases/download/v4.2.0/calendar-v4.2.0.tar.gz
Setup:
Debian 11.6
Apache/2.4.54
PHP 8.1.13 (cli)
First approach:
I found public proxy services for GitHub that proxy the requests through an nginx; you use them by adding entries to the /etc/hosts file:
2a01:4f8:c010:d56::2 github.com
2a01:4f8:c010:d56::3 api.github.com
2a01:4f8:c010:d56::4 codeload.github.com
2a01:4f8:c010:d56::5 objects.githubusercontent.com
That solution works for me on the console, but not for the Nextcloud application.
It seems php-curl ignores the /etc/hosts file.
I also found "CURLOPT_RESOLVE", but it seems that is something I would need to put into the Nextcloud code?
Is there anything similar I can put in php.ini or something like that?
Question:
Is there any option to force php-curl to use the /etc/hosts file without touching the code of nextcloud?
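For reference, CURLOPT_RESOLVE is the library counterpart of curl's --resolve flag, so the proxy addresses can at least be verified at the HTTP layer from the console (a sketch using one of the addresses above; the bracket syntax needs curl 7.57 or newer):

$ curl -I --resolve github.com:443:[2a01:4f8:c010:d56::2] https://github.com/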

Remote access of network computer

I am not sure how or if this can be done. I have a home network and would like to see a computer, not the server, from a remote location. I have Apache on my server. Example: the network computer I would like to see is at IP 152.254.1.33. Is there a way to add this IP to the Apache root directory? I have tried to add a shortcut within the root directory, but it only works on the home network, not via a remote connection.
I need some clarification here on what you are trying to accomplish: are you trying to access the Apache website from outside of the local network?
If that is the case, Apache is automatically set to listen on all network interfaces; you can check this in your virtual host configuration in the sites-enabled directory of your Apache installation.
You should see something like the following in 000-default.conf.
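That is, the stock Debian/Ubuntu default site, which listens on all interfaces via *:80 (a sketch; your DocumentRoot may differ):

# /etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>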
You can test whether Apache is serving pages correctly using the command
curl 127.0.0.1
You should see the HTML of the page being served.
If this is the case, then it's likely the firewall on your machine/router, or your ISP, is blocking the required ports. You can allow Apache through the firewall on Ubuntu using sudo ufw allow "Apache Full".
If you give me some more info in comments we can probably work this out.

Apache HTTP ProxyPass inside Docker container linked to other containers: Wrong remote IP interpreted by linked container

I am migrating an Apache configuration from plain host-based Ubuntu to container-based CoreOS. I have only one instance of CoreOS for exploratory purposes and personal use, so I don't really need a cloud infrastructure compatible solution for this task. Assume all containers are running on the same physical machine.
That Apache configuration was a virtual host ProxyPass with ProxyPreserveHost On. On Ubuntu with an Apache installed on the host machine and no Docker, all is well. The objective is to host multiple web services on the same machine, with each web service being on its own subdomain, on port 443.
For instance, I currently have on my CoreOS installation:
example.com (website)
gitlab.example.com (gitlab)
jenkins.example.com (jenkins)
sonar.example.com (sonar)
monitoring.app.example.com (python)
event.api.example.com (java)
legacy.api.example.com (php)
Each of these web services runs in a separate container, and their ports are NOT published (not accessible from the Internet). As for Apache, it's running in its own container, and its ports are exposed.
I am using container linking to achieve the virtual-host-to-ProxyPass behavior: --link gitlab:gitlab and ProxyPass / https://gitlab:443/.
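For context, each virtual host looks roughly like this (a sketch of the setup described above; the TLS certificate directives are omitted):

<VirtualHost *:443>
    ServerName gitlab.example.com
    ProxyPreserveHost On
    SSLProxyEngine On
    ProxyPass / https://gitlab:443/
    ProxyPassReverse / https://gitlab:443/
</VirtualHost>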
I am now facing a problem: if I watch the Apache logs, I can see incoming connections logged with the expected client IP address. However, the incoming connections recorded by the target containers show a container IP address, i.e. 172.17.0.1.
Due to the diversity of the target containers' web services (GitLab, Python, Java, PHP...), I am NOT able to tweak the implementation of these web services so that they pick the client IP from another location, say the X-Forwarded-For header.
What would be a way to make it so the target containers see the desired IP address they would have seen if they weren't running in Docker? I am open to solutions that involve throwing away Apache HTTP as long as the desired use case is accomplished (port 443 exposed to Internet: one domain -> one webservice, client IP preserved).
Please note that I was not able to use --net=host on the Apache server, because this option is incompatible with container links.
Links are a legacy technology that is being phased out, but you are right: a container sharing the host network cannot be connected to any other network type.
# docker network connect bridge container
Error response from daemon: Container sharing network namespace with another container or host cannot be connected to any other network
Use pipework to connect your Apache container to the outside network. Put Apache and all the other containers on the bridge network to provide internal connectivity.
Keep an eye on the macvlan driver, which you should use instead of pipework once it comes out of the "experimental" build.
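A pipework invocation would look something like this (a sketch; the interface name, container name, and addresses are assumptions for your LAN):

# Give the apache container an address on the host's eth0 network
$ pipework eth0 apache 192.168.1.50/24@192.168.1.1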

How to configure custom hostname to IP resolutions in my system for web development

Preface
A web app can potentially serve different pages depending on the hostname that is requested by the browser, even if all hostnames resolve to the same IP address.
Example
For example, at https://app.example.com, which resolves to 1.2.3.4, users will find the user interface, and at https://admin.example.com, which also resolves to 1.2.3.4, awaits a dashboard through which only the app's owner can administrate users and data in the app.
What We Need
In short, we need to enter, for example, http://admin.app:8000/ in our browsers and have admin.app resolve to 127.0.0.1.
The Question
How can I configure custom hostname to IP address resolutions in my development environment?
(Ubuntu and Derivatives) Configure NetworkManager's dnsmasq
Ubuntu Desktop's default networking configuration is composed of NetworkManager and its slave dnsmasq. The slave dnsmasq listens at 127.0.1.1, and /etc/resolv.conf lists it as the only nameserver. This has some benefits. What it means for this purpose is that we have a fully configurable DNS server, comfortably configured by default.
We can create /etc/NetworkManager/dnsmasq.d/hosts.conf and put in it whatever address statements we'd like:
address=/admin.app/127.0.0.1
We can even use wildcards!
address=/.app/127.0.0.1
See the dnsmasq documentation for details (look for --address).
Since dnsmasq is started by the network-manager service, I would assume that the following would restart it so that the new configuration takes effect:
$ service network-manager restart
But its init script does not control the slave dnsmasq. Therefore the dnsmasq process must be killed, and then the above command will have it start again.
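Something along these lines should do it (a sketch):

$ sudo pkill dnsmasq
$ sudo service network-manager restart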
And that is it!
(Linux) User Specific HOSTALIASES File
Very limited
This would have been my preferred answer because it refrains from altering system configuration. But:
It does not support wildcards
It does not support hostname to IP address resolution
It does not support freely configurable subdomains
It will not work if you have a local DNS server, which is the case in modern Ubuntu.
What is It
It is a user-specific host aliases file. Notice that the format is not the same as the hosts file's. In short, you create a file which contains host aliases, for example
foo localhost
bar localhost
and place it at ~/.hosts. Then you set the environment variable HOSTALIASES to the path of the aliases file. So, for this example:
$ export HOSTALIASES=~/.hosts
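A quick way to check that the alias is picked up by the glibc resolver (a sketch; note that setuid tools such as ping may ignore HOSTALIASES):

$ python3 -c 'import socket; print(socket.gethostbyname("foo"))'
127.0.0.1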
If Testing In a Virtual Machine
In a virtual machine, 127.0.0.1 and localhost will not reach the host, but the guest. In VirtualBox, for example, by default, the host can be reached at 10.0.2.2. So, the guest VM's hosts file can look like:
10.0.2.2 host
10.0.2.2 app.host
10.0.2.2 admin.host
Proxy DNS Nameserver Inside a Virtual Machine
If you're setting up a proxy DNS nameserver inside a virtual machine (perhaps for wildcard support in Windows), the upstream nameserver is usually provided by the host. In VirtualBox, it is 10.0.2.3.
(Windows) Configuring Acrylic DNS Server
Acrylic DNS Proxy is easy to install and configure.
It can help us get hostnames with aliases quickly in Windows.
And it is open source.
Install it.
Open the hosts file (via the start menu entry).
Put in some entries, like 1.2.3.4 >app.
Clear its cache and restart it (via the start menu entry).
Set your DNS server to 127.0.0.1.
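For the last step, something like this from an elevated command prompt should work (a sketch; the adapter name is an assumption):

netsh interface ip set dns "Local Area Connection" static 127.0.0.1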
(Windows, Linux, OSX) System Wide Hosts File
Simply edit the hosts file. Its location depends on the OS. For example:
127.0.0.1 app.localhost
127.0.0.1 admin.localhost
On Windows you can use this nifty open source GUI for editing the hosts file: Hosts File Editor.
Wildcards
The hosts file does not support wildcards!
Ubuntu Desktop
Since Ubuntu 12.04, Ubuntu desktop comes with a local DNS server, which might not respect the hosts file (/etc/hosts). So, for Ubuntu desktop, the dnsmasq answer above is best.
(GNU/Linux)
Since all the major distributions are migrating (or already have migrated) to the systemd stack, the proper place to implement wildcard support would be systemd-resolved: see https://github.com/systemd/systemd/issues/766 for details.
That would also be the place to set custom DNS overrides.
As for Windows: its VM should just get DNS from the host machine; it's too risky to run it on bare metal anyway.
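For example, pointing systemd-resolved at a local dnsmasq instance that holds the overrides might look like this (a sketch):

# /etc/systemd/resolved.conf
[Resolve]
DNS=127.0.0.1

$ sudo systemctl restart systemd-resolved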

DNS problem - dig resolves but curl cannot connect to host

I have recently created a Rackspace cloud server instance using CentOS 5.5. I used yum to install the "Web Server" group (it includes Apache, etc.), added www.booztrakr.com as the ServerName in httpd.conf, and made sure iptables allows traffic on port 80. I had registered this domain with GoDaddy and changed the name servers to the Rackspace name servers on their site. I added "A" and CNAME records to the Rackspace name servers. httpd has been started. When I use curl on the server I can get the Apache landing page. When I dig www.booztrakr.com from a remote machine (over the Internet), the answer section returns:
www.booztrakr.com. 300 IN CNAME booztrakr.com.
booztrakr.com. 300 IN A 184.106.216.156
When I try a browser or curl, it can't connect:
curl -G www.booztrakr.com
curl: (7) couldn't connect to host
I know this has got to be pretty basic and config-related, but I'll be damned if I can see it. Any help would be appreciated. Thanks.
If dig resolves, this just means the DNS server returns the right values. It will even work if the IP doesn't exist.
If an HTTP connection to the server fails, this is a configuration problem.
The server responds to ICMP requests, so it's not a routing problem.
"When I use curl on the server I can get the Apache landing page"
Your web server is running, but you just can't reach it from outside. This is the problem. What does iptables --list output?
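If port 80 turns out to be filtered there, a rule along these lines would open it (a sketch; adjust it to your existing chains):

$ iptables -I INPUT -p tcp --dport 80 -j ACCEPT
$ service iptables save    # persist across reboots on CentOS 5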