Trac - programmatically get wiki page content behind authentication - Apache

I'd like to get the content of a wiki page from my Trac (1.0.9) using a script.
My Trac is served through Apache httpd and uses Basic authentication (AuthType Basic).
So I tried to use wget as follows
wget http://my/trac/wiki/MyWikiPage?format=txt --user=<THISISME> --ask-password --auth-no-challenge -q -O -
but I get a 403 error.
HTTP request sent, awaiting response... 403 Forbidden
Is there something wrong? Or, in other words, is there a way to simply fetch a wiki page remotely from Trac (taking authentication into account)? Thanks

You could install XmlRpcPlugin and use one of the supported libraries, such as xmlrpclib in Python, to fetch the page.
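A minimal sketch of that approach, assuming the plugin's RPC endpoint is exposed at /login/rpc (the exact path depends on the plugin version, and the credentials and page name below are placeholders):
import xmlrpclib  # Python 2; on Python 3 use xmlrpc.client instead

# Placeholder credentials and host -- with Basic auth they can be embedded in the endpoint URL.
server = xmlrpclib.ServerProxy("http://THISISME:PASSWORD@my/trac/login/rpc")

# wiki.getPage returns the raw wiki markup of the requested page.
text = server.wiki.getPage("MyWikiPage")
print(text)
Once the plugin is enabled, the RPC endpoint page itself lists the wiki methods that are actually available on your installation.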

Related

Ionic2 project with Apache PHP REST service on localhost

I have a PHP REST service and an Ionic2 project that out of the box runs on Node.js at localhost:8100. The REST service runs on my computer on localhost:80. When I make calls from Ionic2 (Angular2) to my server on localhost, I get this error in the browser console:
XMLHttpRequest cannot load http://localhost/app_dev.php/login.
Response to preflight request doesn't pass access control check:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://localhost:8100' is therefore not allowed access.
The response had HTTP status code 404.
What I understand is that this is a CORS (Cross-Origin Resource Sharing) issue. One way to solve this would be to change the build script in Ionic to point to a front-end distribution location in my Apache project and run the whole project from localhost:80. Another solution is to change the 'Access-Control-Allow-Origin' header.
What is the simplest, most straightforward solution to this problem?
I checked the possible duplicates, and there was a potential answer to which I want to add some more detail:
Since you are dealing with PHP, the following has worked for me: just add this at the top of your PHP script:
<?php
header('Access-Control-Allow-Origin: *'); // this allows requests from any origin!
header('Access-Control-Allow-Headers: Content-Type'); // and this allows the Content-Type request header!
//more code here
?>
During development, you might very well need to enable CORS in your browser as well; here's an extension for Chrome.
Hope this helps! :)

How to solve HTTPS response 498 when googlebot comes along?

I have an AJAX site, leuker.nl. When Googlebot comes along, the site is started and it retrieves an XML file containing the site text from my backend server.
The HTTP GET request used to retrieve the file returns an HTTP error 498.
Looking at LINK, it explains that this concerns an invalid/expired token (Esri) returned by "ArcGIS for Server".
I don't understand this error; I don't even use ArcGIS and have never heard of it before.
Any idea how to solve this?
In the backend I use Apache httpd 2.4 in combination with Tomcat 8.0. Apache proxies requests to Tomcat through an AJP connector. The requested XML file is returned directly by Apache.

Wget - Problems recursively downloading with authentication

So I'm trying to download the entire domain of a private wiki. I've saved the cookies in a cookies.txt file and am using it with wget for authentication, like so:
wget --load-cookies=cookies.txt --recursive --no-parent --convert-links --backup-converted --adjust-extension --limit-rate=500k https://wiki-to-download
It proceeds to download the entire wiki domain. At first glance, it seemed to have worked. I opened the main page's HTML file locally in my browser, but almost all of the pages besides the home page are the same: the login page...
I'm guessing it authenticated me once, allowing the download of the home page, but then didn't keep my credentials as it retrieved the rest of the pages, forcing it to download the dreaded "login required" page for each. How could I avoid this? In other words, how can I make sure every file gets downloaded correctly, as if I were logged in the whole time?
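For what it's worth, the cookie behaviour being guessed at here (keeping the authentication cookie across every request) is what a requests.Session gives you in Python; a minimal sketch, where the login URL and form field names are purely hypothetical and would need to be read off the wiki's actual login form:
import requests

# Hypothetical login URL and form field names -- copy the real ones from the wiki's login page.
LOGIN_URL = "https://wiki-to-download/login"
session = requests.Session()

# Log in once; the session stores the authentication cookie it receives.
session.post(LOGIN_URL, data={"username": "me", "password": "secret"})

# Every request made through the same session now carries that cookie,
# so pages come back as a logged-in user instead of as the login page.
page = session.get("https://wiki-to-download/SomePage")
page.raise_for_status()
print(page.text)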

downloading file using curl

I have quite a simple task:
I need to download a file from a web page. In the browser, it is done by pressing a submit button. Just a simple button: press it and you see the pop-up window asking where to save the file, and so on. The data is sent to the server via the POST method.
I tried POSTing with curl like this: curl -d "foo=bar&....." [URL]
but this request returns the page itself, not the file. And I am quite confused about how to get the file, since I don't know its address on the server, and the only way to get it is to press this freaking button.
Please help
If you use a Unix-like OS, you can use Wireshark (simply apply the filter "http") or some other tool, e.g. tcpdump.
If you are on MS Windows, Fiddler2 is a very good tool.
First, use one of these tools to get accurate information about the traffic.
Then analyze the HTTP request, especially the request's Cookie header.
Finally, construct your own request with curl.
The foo=bar&..... is only the body of the request; you should also pay attention to the request headers.
Or you can post your URL, so that other people can help you analyze the stuff.
Use Wireshark or a browser plugin that captures the HTTP request sent on submit, then use curl or, for example, PHP's file_get_contents() to emulate the request.
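As a rough sketch of what "emulating the request" can look like in Python (the URL, form fields, and extra headers below are placeholders; the real values come from whatever the capture shows):
import requests

# Placeholder URL and form data -- substitute the values seen in the captured request.
url = "http://example.com/download"
form_data = {"foo": "bar"}
headers = {"Referer": "http://example.com/form-page"}  # add any headers the capture shows are required

# stream=True keeps a large file from being loaded fully into memory.
response = requests.post(url, data=form_data, headers=headers, stream=True)
response.raise_for_status()

# Write the response body to disk, whatever the server chooses to call it.
with open("downloaded_file", "wb") as f:
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)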

500 Internal Server error when using curl on an aspx page with SSL

I'm trying to access an ASPX web page using curl, but it returns a 500 Internal Server Error. It doesn't require any credentials or POST variables that I know of, but I think I'm missing something, because when I try to access it from my browser, it does work. The page is just a form with two fields to be filled in and POSTed.
curl -L https://my.website.com
Do I need to make any changes to my curl script?
P.S. I don't have access to the server or the server's logs.
Some things to try and ideas:
Trace your manual access with e.g. Fiddler, HttpFox, or Firebug. You might see something more elaborate than what you have seen already (like a 301/302 response; I assume you added -L to handle such a possibility?).
Since it works when you check out the page via a browser, the page might attempt a referrer check and fail miserably because there is no referrer (hence the 500, a server-side error). The trace you created in step 1 will show you what to insert with curl's -e option.
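If the trace does point at a missing Referer (or User-Agent), one quick way to test that guess is to repeat the request with those headers filled in; a small Python sketch with placeholder values:
import requests

# Placeholder URL and header values -- use whatever the browser trace shows.
url = "https://my.website.com"
headers = {
    "Referer": "https://my.website.com/",            # plays the same role as curl's -e option
    "User-Agent": "Mozilla/5.0 (compatible; test)",  # some servers also reject requests without a browser-like UA
}

# allow_redirects=True follows 301/302 responses, like curl -L.
response = requests.get(url, headers=headers, allow_redirects=True)
print(response.status_code)
print(response.text[:500])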