How do you work out the IIS Virtual Path for an application? (IIS 6)

When I try to change the ASP.NET version to v4 on IIS 6, I receive the following warning:
Changing the Framework version requires a restart of the W3SVC service. Alternatively, you can change the Framework version without restarting the W3SVC service by running: aspnet_regiis.exe -norestart -s IIS-Virtual-Path
Do you want to continue (this will change the Framework version and restart the W3SVC service)?
How do I work out IIS-Virtual-Path?
I have tried the obvious paths, e.g.:
aspnet_regiis.exe -norestart -s "/WebSites/Extranet/AppName"
Where WebSites is the name of the folder in IIS, Extranet the name of the root app and AppName the name of the Virtual Directory application I am trying to change.
Thanks!
Edit:
How do I work out the virtual path for the Auth virtual directory in following IIS6 setup:
(screenshot of the IIS6 site tree; source: imgbag.com)
I have tried:
aspnet_regiis.exe -norestart -s "/Web Sites/Extranet/Auth"
aspnet_regiis.exe -norestart -s "Auth"
I get:
Installation stopped because the specified path (WhateverIPutIn) is invalid.

I solved it. I had to use:
aspnet_regiis -lk to get a list of the folders in "IIS" format
Then I ran something like:
aspnet_regiis.exe -norestart -s "W3SVC/1234567/root/AppName"

My problem running aspnet_regiis -lk was that I got an incomplete list of IDs, and I also didn't know which ID corresponded to the website I wanted to work on.
An easier way to find the IDs for your websites is to click on the "Web Sites" node (folder) in IIS. On the right-hand side you should see a list of all websites with their Identifier, State, IP address and port.

Here's a good summary
W3SVC/ + [Site Identifier from IIS Console] + /root
for example W3SVC/1234567/root
To find the Identifier:
Click on the Web Sites node (folder) in IIS.
On the right-hand side is a list of all websites with their Identifier, State, IP address and port.
Now all together
aspnet_regiis.exe -norestart -s "W3SVC/1234567/root"
Finally
Append the name of the virtual directory to the end (W3SVC/1234567/root/APPNAME) if you need to target a specific application.

I think you need to use a path starting with /W3SVC. Maybe this article can help you further.

To change the Framework version without restarting W3SVC:
Run aspnet_regiis.exe -norestart -s IIS-Virtual-Path
aspnet_regiis.exe should be run from %SystemRoot%\Microsoft.NET\Framework\(required .NET version),
e.g. C:\Windows\Microsoft.NET\Framework\v4.0.30319
IIS-Virtual-Path is: W3SVC/(WebsiteID)/root[/AppName], where
(WebsiteID) is the identifier as listed in IIS (see Diego C's answer above) and
[/AppName] is an optional virtual directory below your website (e.g. W3SVC/1234567890/root/dotnetnuke).
Open a command prompt.
Navigate (CD) to C:\Windows\Microsoft.NET\Framework\v4.0.30319
Execute aspnet_regiis.exe -norestart -s "W3SVC/1234567890/root/dotnetnuke"
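To confirm the change took effect, you can list the mappings again; -lk prints the Framework version registered for each IIS metabase path:
aspnet_regiis.exe -lk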

I was able to follow the advice in joshcomley's post here, but had to get the name of the virtual path from the generated XML file.
You can use IIS's "export site configuration to a file" feature (an XML file). Inside, there are a few tags that look like this:
<IIsWebVirtualDir Location="/LM/W3SVC/2070355274/root"
Just pick the first one ending with "root", and drop the leading /LM when passing it to aspnet_regiis (i.e. use W3SVC/2070355274/root).
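If the exported file is large, a quick way to pull out just those lines (site-export.xml is an assumed name for the exported file) is:
findstr /i "IIsWebVirtualDir" site-export.xml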
That worked great.

WebSharper UI.Next site working locally but not in Docker

I have a site that's built using Mono and WebSharper.UI.Next. It's self-hosted (OWIN) and works flawlessly on my machine directly. However, when I try executing it from within a Docker container (FROM mono:3.10-onbuild), requests for some of the WebSharper script files "disappear", e.g. WebSharper.Collections.min.js returns a 404.
This behaviour can be reproduced with the project created by WebSharper's client-server self-hosted OWIN project template and the Dockerfile below:
FROM mono:3.10-onbuild
RUN ln -s /usr/src/app/build /usr/src/app/bin
CMD mono ./Site.exe http://*:9000
EXPOSE 9000
(Site should obviously match the name of the site being used)
It turns out that the trick is to override the default root directory when running in the Docker container. Changing the CMD from the one above to:
CMD mono ./Site.exe _PublishedWebsites/Site/ http://*:9000
overrides the default home dir, and then all the scripts are served correctly.
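Putting it together, the complete working Dockerfile from the template example looks like this (Site is again a placeholder for your project's name):
FROM mono:3.10-onbuild
RUN ln -s /usr/src/app/build /usr/src/app/bin
CMD mono ./Site.exe _PublishedWebsites/Site/ http://*:9000
EXPOSE 9000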

Create local Debian repository

My goal is to demonstrate creating a local Debian repository with controlled versions of the tools used (e.g. compiler versions) to make a build system more predictable.
I've tried to follow this example: http://linuxconfig.org/easy-way-to-create-a-debian-package-and-local-package-repository
but when I get to the apt-get update stage, I always get a 404 not found on the repository I've added.
The apache2 server is running, I can view the default page installed at http://localhost/html/index.html.
I am trying this with the file fortune-mod_1%3a1.99.1-7_amd64.deb installed to /var/www/debs. I create the Packages.gz file as the tutorial suggests:
dpkg-scanpackages debs /dev/null | gzip -9c > debs/Packages.gz
I also add a new file: /etc/apt/sources.list.d/myppa.list with this line:
deb http://localhost debs/
I restart the apache2 service just in case:
sudo service apache2 restart
but running:
sudo apt-get update
still produces this error:
W: Failed to fetch http://localhost/debs/Packages 404 Not Found
Is there something basic I'm missing? Ultimately, I'd like to get this working over a LAN, but first have to get it working on a single machine.
EDIT: I'm doing this on Ubuntu 14.04.
tldr; use aptly
It's the easiest apt repository management tool I've found, and it comes with a neat tutorial showing how to create, populate, and publish your own apt repository.
References:
https://www.aptly.info/
https://www.aptly.info/tutorial/repo/
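As a rough sketch, publishing a .deb with aptly looks something like this (the repo name, distribution and the -skip-signing flag are illustrative choices, not requirements):
aptly repo create myrepo
aptly repo add myrepo fortune-mod_1%3a1.99.1-7_amd64.deb
aptly publish repo -distribution=trusty -skip-signing myrepo
aptly serve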
I ended up solving the problem. It was an issue with the default document root being different in the tutorial than on my system. All I did was move my debs folder into html (the document root turns out to be /var/www/html, not just /var/www, on my install). That did the trick.
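For anyone hitting the same thing, the corrected sequence on a stock Ubuntu 14.04 Apache install (document root /var/www/html) would be roughly:
sudo mv /var/www/debs /var/www/html/debs
cd /var/www/html
dpkg-scanpackages debs /dev/null | gzip -9c > debs/Packages.gz
sudo apt-get update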

Changing permissions of a file added to a Docker volume

In the Docker best practices guide it states:
You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image.
And by looking at the source code of e.g. the cpuguy83/nagios image, this can clearly be seen: everything from the nagios to the apache config directories is made available as a volume.
However, looking at the same image, the apache service (and the cgi-scripts for nagios) run as the nagios user by default. So now I'm in a pickle, as I can't seem to figure out how to add my own config files in order to e.g. define more hosts for nagios monitoring. I've tried:
FROM cpuguy83/nagios
ADD my_custom_config.cfg /opt/nagios/etc/conf.d/
RUN chown nagios: /opt/nagios/etc/conf.d/my_custom_config.cfg
CMD ["/opt/local/bin/start_nagios"]
I build as normal, and try to run it with docker run -d -p 8000:80 <image_hash>, however I get the following error:
Error: Cannot open config file '/opt/nagios/etc/conf.d/my_custom_config.cfg' for reading: Permission denied
And sure enough, the permissions in the folder look like this (while the apache process runs as nagios):
# ls -l /opt/nagios/etc/conf.d/
-rw-rw---- 1 root root 861 Jan 5 13:43 my_custom_config.cfg
Now, this has been answered before (why doesn't chown work in Dockerfile), but no proper solution other than "change the original Dockerfile" has been proposed.
To be honest, I think there's some core concept here I haven't grasped (I can't see the point of declaring config directories as VOLUME, nor of running services as anything other than root). So, given a Dockerfile like the one above (which follows Docker best practices by adding multiple volumes), is the solution/problem:
To change NAGIOS_USER/APACHE_RUN_USER to 'root' and run everything as root?
To remove the VOLUME declarations in the Dockerfile for nagios?
Other approaches?
How would you extend the nagios dockerfile above with your own config file?
Since you are adding your own my_custom_config.cfg file directly into the container at build time, just change the permissions of the my_custom_config.cfg file on your host machine and then build your image using docker build. The host machine's permissions are copied into the container image.
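For example, on the host before building (mynagios is an illustrative image tag; 644 makes the file readable by the non-root nagios user inside the container):
chmod 644 my_custom_config.cfg
docker build -t mynagios .
docker run -d -p 8000:80 mynagios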

Can't log in to Trac

So I am trying to use Trac as a standalone bug tracker. I've generated a user and password using the script on this page. The digest.txt file is in the ~/.foo-trac/conf/ directory. Its contents look like this:
montreal:FOO:904fa5b01944434358e48467fbf5203c
Running this command:
tracd -p 8000 --auth="foof,.foo-trac/conf/digest.txt,FOO" ~/.foo-trac/
I get no errors but still can't log in. A strange detail is that tracd shows this line when I click log in:
127.0.0.1 - - [16/Oct/2014 03:47:53] "GET /.foo-trac/login HTTP/1.1" 500 -
What's going on?
UPD
Now I am trying another way: using basic auth as described on this page.
I've created a new environment with this command: trac-admin /home/montreal/.trac initenv. At the prompt, I gave my new project the name Foo.
Then I created a new user by running this command: sudo htpasswd -c /home/montreal/.trac/.htpasswd username and entered a password. My .htpasswd file looks like this:
username:$apr1$bLbNsCx/$vbVXn5gn6HG.hJvvq/SaD1
Now I'm running trac with this command and getting the same result:
tracd -p 8000 --basic-auth="Foo,/home/montreal/.trac/.htpasswd," /home/montreal/.trac
The link says that the first argument of --basic-auth should be projectdirname, but there is no Foo directory in /home/montreal/.trac.
It looks like I've got the /fullpath/environmentname/.htpasswd argument correct.
But how can I get the realmname argument? Maybe that's the trick. Maybe some tracd logs would be helpful, but the log folder is empty and I don't know anywhere else to look.
I need this bloody bug-tracker.
Don't use relative paths (~/.foo-trac/); use absolute ones.
The same applies to the auth file path, which is not just relative like the path to your Trac environment but outright wrong: its absolute path is not /.foo-trac/conf/digest.txt, yet that is what tracd picks up from the command line, as the "strange" log line shows.
Enable Trac DEBUG logging in .foo-trac|.trac/conf/trac.ini as advised in the wiki documentation on this topic.
The first argument of --basic-auth should be projectdirname, that is /home/montreal/.trac itself (commonly referred to as the Trac environment directory), nothing else.
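Putting that together for the original digest setup, an invocation with absolute paths might look like this (assuming the environment lives at /home/montreal/.foo-trac; the first --auth field is the environment's base directory name, and FOO must match the realm in digest.txt):
tracd -p 8000 --auth=".foo-trac,/home/montreal/.foo-trac/conf/digest.txt,FOO" /home/montreal/.foo-trac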

nfsnobody User Privileges

I have set up an NFS file share between two CentOS 6 64-bit machines. On the server, the folder being shared was originally owned by the root user. On the client it turned up as being owned by nfsnobody. When I tried to write to the folder from the client, I got a permissions error. So I changed the folder ownership on the server to nfsnobody and chmod'd it to 777. However, still no joy - I continue to get a permissions error. Clearly, there is more to this. I would be much obliged to any Linux gurus out there (I personally wouldn't merit being called anything more than a newbie) who might be able to help fix this issue.
Edit - I should have mentioned that trying to write to the shared folder from the client actually manages to create a file entry. However, the file size is 0 and the permissions error is reported.
The issue here is to do with the entry in /etc/exports. It should read:
folder ip(rw,all_squash,sync,no_subtree_check)
I had missed the all_squash bit. That apart, make sure that the folder on the server is owned by nfsnobody. On my setup, both the client and server nfsnobody users ended up with a user id of 65534. However, it is well worth checking this (in /etc/passwd) or else... .
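As a concrete illustration (the folder path and subnet below are made up), the /etc/exports line would look like this - note there must be no space before the opening parenthesis, as a space changes the meaning of the export:
/srv/nfs/shared 192.168.1.0/24(rw,all_squash,sync,no_subtree_check)
After editing /etc/exports, you can re-export without restarting NFS:
sudo exportfs -ra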
Here are a couple of useful references
How to set up an NFS Server
NFS on CentOS
For the benefit of anyone looking to set up an NFS server, I give below what worked for me on my CentOS 6 64-bit machines.
SERVER
yum install nfs-utils nfs-utils-lib - install NFS
rpm -q nfs-utils - check the install
/etc/init.d/rpcbind start
chkconfig --levels 235 nfs on
/etc/init.d/nfs start
chkconfig --level 35 rpcbind on
With this done you should create the folder you want to share
mkdir folder
chown 65534:65534 folder
chmod 755 folder
Now define the folder to be shared/exported. Use your favorite text editor (vi or whatever) to open/create /etc/exports and add:
folder clientIP(rw,all_squash,sync,no_subtree_check)
CLIENT
Install, check, bind and start as above
mount -t nfs serverIP:folder clientFolderLocation
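To sanity-check the share from the client (paths are illustrative; give the exported folder as its absolute path on the server):
showmount -e serverIP
sudo mount -t nfs serverIP:/srv/nfs/shared /mnt/nfsshared
touch /mnt/nfsshared/test.txt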
If all goes well you should now be able to write a little script on your client
<?php
// Write a test file into the NFS-mounted folder
// (mounted one level above the web server's document root)
$file = $_SERVER['DOCUMENT_ROOT']."/../nfsfolder/test.txt";
file_put_contents($file, 'Hello world of NFS!');
?>
Browse to it and you should find that test.txt now exists on the server with the content "Hello world of NFS!". In the example I have placed my mounted drive one level above the document root.