marklogic "getting started" app returns 404 on mac - osx-yosemite

I'm running MarkLogic 8 (developer edition) on Mac OS 10.10.1.
I'm a beginner with ML, and I'm reading the "Getting Started" material in the online docs, in particular the section "Sample XQuery Application that Runs Directly Against an App Server."
I created the "TestServer" app server just fine, following the instructions. I then copied and pasted the text for the four XQuery files in the exercise, load.xqy, dump.xqy etc.
My local copies of the four .xqy files are under ~/Library/MarkLogic/Apps/Test, per the instructions. Read and execute permissions are open along the entire filepath, down to the .xqy files themselves.
When I request http://localhost:8005/Test/load.xqy, as instructed, I get a 404 Not Found response.
lsof -i :8005 indicates that MarkLogic is indeed listening on port 8005.
I checked the TestServer configuration against the instructions, disabled and re-enabled TestServer, and stopped and restarted MarkLogic, always with the same result: 404 Not Found.
I haven't been able to find anything in either the ML mail archives or Stack Overflow to get me past this sticking point.
Any ideas or suggestions would be very much appreciated. Thank you!

This seems like a permissions issue. Does it work when you run it as the admin user?
Have you checked that the files are loaded into the modules database?
Also check that the permissions were set with the correct role for those files.
Check that the user running the app has the role that you used when setting permissions on those files.

This worked for us:
In the TestServer configuration, instead of putting just Test in the root field, put Apps/Test/, which is the location of the four files (load.xqy, dump.xqy, update-form.xqy and update-write.xqy) relative to the MarkLogic installation directory -- in our case (CentOS) that was /opt/MarkLogic/.
Then, in that directory, we issued this command:
chmod +r *.xqy
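
A quick way to confirm the files ended up readable -- a sketch using the Mac paths from the question rather than our CentOS ones:

ls -l ~/Library/MarkLogic/Apps/Test/*.xqy
# each file should show an r in the group and world columns, e.g. -rw-r--r--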

If you followed all the instructions correctly, just remove Test from the URL: if yours is "http://localhost:8005/Test/load.xqy", make it "http://localhost:8005/load.xqy". The app server's root field maps directly onto the URL path, so if the root already ends in Test/, that directory name must not appear in the URL again.
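Either way, a quick check from the shell shows which mapping is in effect -- a sketch, assuming curl is installed:

curl -i http://localhost:8005/load.xqy
curl -i http://localhost:8005/Test/load.xqy
# whichever of the two returns 200 OK tells you where the root field points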

Related

XAMPP Apache server error 403 Access Forbidden on Windows 10

I've installed XAMPP on two different PCs, both running Windows 10, and on both it gives me the same error. The Apache server runs correctly and the ports are dedicated to httpd as they're supposed to be, but when I try to access my website project folder to test the HTML files, Google Chrome, Firefox and Internet Explorer all show a message saying: "Error 403 - access forbidden because you don't have the right permissions". I've tried everything in the other similar questions here and have already given permissions to all users on my folders, even on the whole C: drive. I also modified the config files inside the /apache/conf/ folder, and that didn't work. I've seen that in some questions about the same issue, the httpd.conf file is a little different, with parts that I can't see in mine. Thank you so much.
Finally, I solved the problem by learning some more. I'm posting it here in case someone else is starting out like me. The point is that when you install XAMPP, the program has a default folder where it lets you put your projects, called "htdocs", so you can only work with project folders that you create inside this "htdocs" folder. If you try to create a new folder on C:/, for example, the program will not let you access it, for security reasons (otherwise anybody entering your server could access all the folders on your computer). I hope this is of some help to someone else. Thank you, and sorry for the newbie question.
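For completeness: if you really do want Apache to serve a folder outside htdocs, it needs both an Alias and a Directory block granting access. A minimal sketch for httpd.conf, assuming Apache 2.4 (what recent XAMPP ships) and a hypothetical C:/projects folder:

Alias "/projects" "C:/projects"
<Directory "C:/projects">
    Require all granted
</Directory>

Without the Directory block granting access, Apache answers with exactly this kind of 403.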

Can't open Fossil repo over web

I've been struggling with this problem for a couple of days, but can't seem to fix it. I think I'm almost there... but... not quite :(
This is where I am at.
I'm on a headless Debian server, running Virtualmin/Webmin for creating my domains, users, etc. I don't know if this will mess things up, but I'm happy to modify the config files manually (via Webmin or via ssh/vim).
I am attempting to run Fossil as a CGI service under Apache.
It's an internal site, named homeserver.net. I can reach the default pages just fine, and add and create links etc. as I want to.
Please note that the solution to my problem is at the end of the question.
So the files are located on disk at the following path, which tallies with my Apache document root:
/home/homeserver/www
I would like to run Fossil to have both the internal site and, later on, any dev work that I practice on, in separate files. So I created a new directory for these repositories.
/home/homeserver/repos/web/site.fossil
/home/homeserver/repos/dev/ [no repository yet!]
Reading the instructions on the Fossil page, I created a short CGI file called 'fos_repo.cgi' that reads:
#!/usr/bin/fossil
directory: /home/homeserver/repos
notfound: http://www.homeserver.net/site404.htm
When I open the link to
www.homeserver.net/cgi-bin/fos_repo.cgi
I get redirected to the 404 page that I have written. So the script is clearly being read and working.
From reading the fossil pages I understand that I should be able to use the following link to open/access the repo.
www.homeserver.net/cgi-bin/repos/web/site
I'm not sure why this isn't working...
So far I have tried the following.
I opened the repository from the cli, and had the server run in the background
fossil server site.fossil &
I thought maybe the file should be inside the main repo directory, not inside a subdirectory, so I moved it... it now lives in
/home/homeserver/repos/site.fossil
I tried creating an alias to the file in apache
Alias /home/homeserver/repos/web/site.fossil /home/homeserver/www/repos
When I browse to
www.homeserver.net/repos/site
I get nothing, but going to
www.homeserver.net/repos/site.fossil
will attempt to download the file (which is binary).
So I think I'm getting somewhere, but I'm not sure what I'm missing.
I've used fossil before, but I ran it as a local server, and started it up as and when I needed it.
I'm running it like this so that I can eventually push the site out to a live VPS (and maybe even end up hosting the Fossil site on the VPS as well).
PS: I really liked Fossil when I used it before, and loved the whole integrated wiki and bug tracker, and the fact that I could simply copy the file to my external drive to do a backup. I don't really want to change to something else, but if I have to...
thanks in advance.
David
Edit: trying other options.
So I thought I would try the single-repository method shown on the Fossil page, and adjusted my CGI script accordingly.
Now when I navigate to www.homeserver.net/cgi-bin/fos_repo.cgi I get the following message returned:
SQLITE_CANTOPEN: cannot open file at line 30276 of [f5b5a13f73]
SQLITE_CANTOPEN: os_unix.c:30276: (21) open(/home/homeserver/repos)
However, if I ssh to the server and start it manually with
fossil server site.fossil
I can get to the server with www.homeserver.net:8081
So either I have a problem with my SQLite usage in Apache, or something else is wrong. Please help.
Solution
So, for simplicity's sake, I've decided that using a single CGI file for each repo is what I am going to go with.
My initial directory structure was as follows:
/home/homeserver/www
/home/homeserver/www/repos
/home/homeserver/www/repos/web # for web site development
/home/homeserver/www/repos/dev # for other development
I think part of my problem was that I was hoping that, by having directory: point to my repos/ location, Fossil would find the site.fossil file (located in repos/web) and the dev.fossil file (located in repos/dev).
Obviously this didn't work.
The reason I wanted it to look like this was to keep the information on my system separated.
For some reason I had decided that pointing Fossil at repos/ would automatically give me a nice Fossil-style front page with links to my repositories. However, after using the directory: version and getting the following error message
Unable to find or open the project repository
I realised that I was still going to need to write my own front page linking to the repositories, and that my expectation was a little too much.
So I've decided to run with a single CGI file pointing to each repo that I need to make.
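For anyone following along, a per-repository CGI file uses a repository: line instead of directory: -- a minimal sketch based on the directory structure above:

#!/usr/bin/fossil
repository: /home/homeserver/www/repos/web/site.fossil

One such file per repository under cgi-bin, each with its own name, gives every repo its own URL.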
Instead of
www.homeserver.net/cgi-bin/repos/web/site
try
www.homeserver.net/cgi-bin/repos.cgi/index
Reading your (very long) question again, I suggest trying
www.homeserver.net/cgi-bin/fos_repo.cgi/index

WordPress: Installing plugin error -> Could not create directory

This has been asked again and again, but none of the solutions I found actually work for me. I'm testing a new server (Ubuntu Server 14.04) and have gone through the whole installation process of the various required software. So far I can access my internal web page via
http://myInternalIP/wordpress/
I added there a dummy post and it looks ok.
Now I wanted to add a plug-in, but I'm having major trouble with that.
So here is what I have done.
I added a new user called ftps that has its home dir in
/usr/share/wordpress/
and it is a member of these groups:
~$ groups ftps
ftps : ftps www-data
When I try to add a plug-in, all goes well until I get the following message:
Downloading install package from https://downloads.wordpress.org/plugin/wordpress-importer.0.6.1.zip…
Unpacking the package…
Could not create directory.
Return to Importers
The general answer I found in many posts is that this is a permission issue. Fine. Well, I've been fighting with the permission issues for xx hours now. So here is a brief summary of what I've done:
I've tried changing ownerships and groups around (www-data, my user name, ftps). It did not work.
I've changed permissions to 777 on the whole wordpress directory in /usr/share/wordpress.
I've tried the following commands:
sudo -u helder touch /usr/share/wordpress/wp-content/plugins/test.txt
sudo -u ftps touch /usr/share/wordpress/wp-content/plugins/test.txt
sudo -u www-data touch /usr/share/wordpress/wp-content/plugins/test.txt
All of these commands successfully created a file in that directory.
My feeling is that permissions are not the issue, but I might be wrong... What should I look out for?
Thanks
I had to enable write_enable=YES in the file vsftpd.conf.
If you are using vsftpd as your FTP server and have enabled passive connections, you need to add pasv_promiscuous=YES to /etc/vsftpd/vsftpd.conf.
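For reference, the relevant lines in vsftpd.conf would look like this -- a sketch, assuming a stock vsftpd install (vsftpd only allows comments on their own lines):

# allow FTP commands that modify the filesystem (uploads, mkdir, ...)
write_enable=YES
# skip the anti-FTP-bounce check on passive-mode data connections
pasv_promiscuous=YES

Restart the service afterwards (e.g. sudo service vsftpd restart) so the changes take effect.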

Pushing my Mercurial Repository through HTTP with Apache and Windows

So I have managed it. I can clone Mercurial repositories remotely over HTTP from my Windows Server 2003 machine, using that machine's IP address. I did deactivate IIS 6 and am using Apache 2.2.x now. But not everything works yet... darn! Here's the thing:
Cloning goes smoothly! But when I want to push my changes to the original repository I get the message "cannot lock static http-repository". On the internet I've read several explanations saying that Mercurial wasn't designed to push over HTTP connections. Still, on the Mercurial website there's something about configuring an hgrc file.
There's also the possibility of configuring Apache to host via HTTPS (or SSL). For this you have to load the OpenSSL module and generate keys.
Configuring the hgrc file
Just add "push_ssl = false" under the [web] line. But where to put this file when pushing your changes back?! Because I placed it in the root of the server, in the ".hg" directory, nothing works.
Using SSL/HTTPS with Apache
When I try to access 'https://myipaddress' it fails, displaying a Dutch message that roughly means "the server is taking too long to respond". Trying to push gives me a Dutch error message that means about the same. It cannot connect to my server via HTTPS, although I followed the steps at this blog exactly.
I don't care which of the above solutions works for me; so far none of them do. So please, can anyone help me with one of the solutions above? Pick the easiest! Help will be greatly appreciated, and not only by me.
Summary
-Windows Server 2003
-Apache 2.2 with OpenSSL
-Mercurial 1.8.2
-I can clone, but not push!
Thank you!
Maarten Baar(s)
It seems like you might have Apache configured incorrectly for what you want it to do. Based on your question it sounds like you have a path (maybe the root of the server) pointing to the repository you want to serve.
Mercurial comes with a script for this exact purpose; in the latest version it is hgweb.cgi. There are reasonably good instructions for setting it up on the Mercurial site. It should allow both cloning and pushing. You will need push_ssl = false if you will not be configuring HTTPS, and also an allow_push line, which lets certain users, or everyone (*), push to the repository. But all of that should be covered by the setup docs.
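A minimal sketch of such a setup, assuming hgweb.cgi is copied to C:/hg/hgweb.cgi and the repository lives at C:/repos/myrepo (both paths hypothetical). The ini file below is the one the config variable inside hgweb.cgi points at:

# C:/hg/hgweb.config
[paths]
/ = C:/repos/myrepo

[web]
push_ssl = false
allow_push = *

And the matching Apache 2.2 directives in httpd.conf:

ScriptAlias /hg "C:/hg/hgweb.cgi"
<Directory "C:/hg">
    Order allow,deny
    Allow from all
</Directory>

You would then clone and push against http://yourserver/hg. Note that push_ssl = false with allow_push = * lets anyone who can reach the server push, so keep this to a trusted network.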

Python script runs via Apache when permissions are 755 but gives Error 500 when 777?

I uploaded a basic python script to my shared hosting at Dreamhost, and changed the permissions to 777. It ran fine from the shell (via SSH) but would display a 'Server Error' when called from the browser.
In the error.log, the error was 'Premature end of script headers'.
I wrote to DreamHost, who (surprisingly quickly) replied by changing the permissions to 755, after which the script started working properly under Apache (I could see the output in the browser).
But this doesn't seem right: how can more lenient permissions stop anything from functioning?
Allowing anyone to edit a CGI script means it would be easy to insert a backdoor into the system. httpd is correctly refusing to run a suspect program.
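In other words, under the suEXEC-style policy that shared hosts like DreamHost use, a script that is group- or world-writable is refused outright, so the CGI process dies before printing any headers -- hence 'Premature end of script headers' in the log. The fix is what support did -- a sketch, with a hypothetical path:

chmod 755 ~/example.com/script.py
# rwxr-xr-x: only the owner can modify the script; everyone can read and execute it

The same check applies to the directories above the script, so they should not be world-writable either.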