Where can I find the source of the SOAP server in Rebol mentioned here:
http://www.rebolplanet.com/zine/rzine-1-02/#sect6.
The link http://www.compkarori.co.nz/reb/discordian.txt no longer works.
It's also now in my GitHub repo
https://github.com/gchiu/Rebol2/blob/master/Scripts/discordian.r
Whenever you have this sort of question, try pasting the URL into the search box of archive.org (The Internet Archive).
In this case, a copy of the file was snapshotted in 2004:
http://web.archive.org/web/20040205210622/http://www.compkarori.co.nz/reb/discordian.txt
(You might let the operators of the site know the %reb/ directory is missing, since the others in the set are still there.)
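If you would rather do this lookup from a script than the search box, the Internet Archive also exposes an availability endpoint. A minimal sketch in Java, assuming the documented archive.org availability API and omitting error handling:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class WaybackCheck {
    public static void main(String[] args) throws Exception {
        // Ask the Wayback Machine whether it holds a snapshot of a URL.
        String target = "http://www.compkarori.co.nz/reb/discordian.txt";
        String query = URLEncoder.encode(target, StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://archive.org/wayback/available?url=" + query))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON reply lists the closest snapshot, if one exists.
        System.out.println(response.body());
    }
}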
That's quite a puzzling problem. I have multiple MediaWiki installations; in this specific case, version 1.34.
I can log in to all of these MediaWikis and everything works fine.
I can also access all of these MediaWikis via the API, except one. The strange thing is that all of them are configured almost identically. I even copied the configuration from one wiki where everything was working to the second wiki.
To be more precise: if I send ...
/wikiA/api.php?action=query&meta=tokens&format=json&type=login
... I get a very reasonable answer, e.g.:
{"batchcomplete":"","query":{"tokens":{"logintoken":"37ec2e690eeb48a10ac66b2ccbca2b576000f9f4+\\"}}}
If I send ...
/wikiB/api.php?action=query&meta=tokens&format=json&type=login
... I get the following answer, e.g.:
{"error":{"code":"readapidenied","info":"You need read permission to use this module.","*":"See http://xxx.xxx.xxx.xxx/wikiB/api.php for API usage. Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/mailman/listinfo/mediawiki-api-announce> for notice of API deprecations and breaking changes."}}
This can be reproduced using any web browser.
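For what it's worth, the comparison is easy to script as well. A minimal sketch in Java; the host is a placeholder, and the query string is exactly the one shown above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MediaWikiTokenCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host; substitute your own server.
        String base = "http://wiki.example.org";
        for (String wiki : new String[] {"wikiA", "wikiB"}) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(
                    base + "/" + wiki + "/api.php?action=query&meta=tokens&format=json&type=login"))
                    .GET()
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // wikiA returns a logintoken; the broken wiki returns "readapidenied".
            System.out.println(wiki + ": " + response.body());
        }
    }
}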
Q: What could be the reason that on wikiB I can't even access the normal login module? It can't be the configuration; it's almost completely identical. It can't be the source code; I ran a diff on the PHP files and found no significant differences. It seems it must be something in the database, but how do I approach that? Does anyone have an idea? I would appreciate it very much if you could help!
I analyzed the database: no difference. I did more research using Google and found a bug report.
It's a bug in MediaWiki. They shipped an official software release with exactly that kind of bug.
It turns out there is a 1.34.0 version and a 1.34.1 version. My wikiA has 1.34.1 while wikiB had 1.34.0. After copying this one file, includes/api/ApiQuery.php, from wikiA to wikiB, everything worked fine.
https://gerrit.wikimedia.org/r/c/mediawiki/core/+/580097/
I only found APIs to get the issue list, issue content, and the issue comments list and content; I found no API for the issue content edit history or the issue comments edit history.
No, this cannot currently be done purely from the API.
However, if we reverse engineer the way GitHub loads past edits in the web interface, and do a bit of scraping, we can accomplish the same thing without the API. Unfortunately, this means we don't have the reliability of an API: GitHub's web interface is liable to change at any time, breaking our code. But it's better than nothing!
So, first we need a log of all the edits for a comment. Let's do this with the comment https://github.com/seisvelas/crypsee/issues/1#issue-874033952 (from a test repo provided by the gentleman who set the bounty on this question). In order to get a log of this issue's comments, we need to prepend '05:' and the word 'Issue' to the issue number, then base64 encode the result. Why '05:'? I have no idea. But it's always there and it won't work without it. So we'll be base64 encoding the string "05:Issue874033952", which gives us MDU6SXNzdWU4NzQwMzM5NTI=
Great, now we insert MDU6SXNzdWU4NzQwMzM5NTI= into this URL scheme: https://github.com/_render_node/{BASE64 ENCODING HERE}/comments/comment_edit_history_log, resulting in a link to https://github.com/_render_node/MDU6SXNzdWU4NzQwMzM5NTI=/comments/comment_edit_history_log
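For illustration, here is that encoding and URL construction in Java (the '05:Issue' prefix and the _render_node URL scheme are exactly as described above):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EditHistoryUrl {
    public static void main(String[] args) {
        // Prefix the issue number with "05:Issue", then base64 encode it.
        String issueId = "874033952";
        String encoded = Base64.getEncoder()
                .encodeToString(("05:Issue" + issueId).getBytes(StandardCharsets.UTF_8));
        // encoded == "MDU6SXNzdWU4NzQwMzM5NTI="
        System.out.println("https://github.com/_render_node/" + encoded
                + "/comments/comment_edit_history_log");
    }
}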
Following that link, we see an edit history, but not the contents of the edits themselves:
However, this gives us the information we need! If we look at the HTML, we see that all edits prior to the current edit are defined as buttons with a link to that edit:
<button
type="button"
class="btn-link dropdown-item p-2"
role="menuitem"
data-edit-history-url="/user_content_edits/MDE1OlVzZXJDb250ZW50RWRpdElzc3VlRWRpdDo1MzIxODcxNzE="
>
The URL pointed to by data-edit-history-url is the same URL the browser loads (visible in the network tab) when you click to view a past edit in the web interface!
Unfortunately, if you attempt to view that page on its own, you get a 404. It is intended to be viewed only from the web interface. But that's no problem: just go to the web interface, view one of the edits, and copy the headers it sends along. In my case I'm using Chromium, so I just find the request to the edit in my network tab, right click, hit 'copy as fetch request (Node.js)', and voilà, with those headers I'm good to go!
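The shape of that request, sketched in Java; every header value below is a placeholder, since the whole point is that you paste in the real headers (cookies included) copied from your own browser session:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EditContentFetch {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(
                "https://github.com/user_content_edits/MDE1OlVzZXJDb250ZW50RWRpdElzc3VlRWRpdDo1MzIxODcxNzE="))
                // Placeholder values: substitute the headers your browser actually sent.
                .header("Accept", "text/html")
                .header("X-Requested-With", "XMLHttpRequest")
                .header("Cookie", "<copied from your browser session>")
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The returned HTML contains the content of the past edit.
        System.out.println(response.body());
    }
}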
For example, for the comment we've been using this whole time, I make that request and get back a bunch of HTML. The content of the original edit is near the end:
<ins><p class="rich-diff-level-zero">before edit</p></ins>
There it is! I could write a script to automate this, but then I'd be doing everything for you :3 Suffice it to say that with a day's work of cleverly organized scraping, this is roughly what you must do in order to view these revisions. If someone does make such a tool, the OSINT community will surely be immensely grateful!
To see the features of the GitHub API, the best source is the official reference:
https://docs.github.com/en/rest/reference/issues
Check the items you mentioned (issue comments, issue edit history, etc.) in the link above.
As far as I saw, it is possible to retrieve issue comments, but I did not see a section for the issue edit history.
I also suggest the following link regarding the issue edit history:
https://github.com/isaacs/github/issues/954
I have a config.yml file that has some comments like:
# That's the message when someone joins the server
Message: Hello User
But when I save the config.yml file and open it again, the comment vanishes and cannot be saved.
I tried to search for an API for this problem but could not find one.
I don't want to use
saveDefaultConfig();
or
getConfig().options().copyDefaults(true);
saveConfig();
because I don't want to save this through code. I want to save it with an API.
What API should I use?
I would recommend you use the Spigot API; in my opinion it's the better one.
Both APIs only support ONE comment, in the header section of the config. If you want to save more than one comment, you should code your own YAMLConfiguration.
To save this one header you should try
getConfig().options().header("Your Comment");
getConfig().options().copyHeader(true);
You can separate the comment into multiple lines with \n.
You can create a config.yml inside your src folder, which will be included in the jar. You can set your default values and any comments by editing this file in your IDE. All you have to do in your main class is call saveDefaultConfig(); in your onEnable(). After that, you can use getConfig() to access your configuration.
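For completeness, a minimal sketch of such a plugin class, using standard Bukkit API and the Message key from the question:

import org.bukkit.plugin.java.JavaPlugin;

public class MyPlugin extends JavaPlugin {
    @Override
    public void onEnable() {
        // Copies src/config.yml (comments included) out of the jar
        // on first run; does nothing if the file already exists.
        saveDefaultConfig();
        String message = getConfig().getString("Message");
        getLogger().info(message);
    }
}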
That's sadly not possible using the given tools!
You need to write your own ConfigManager, where you can do that!
Here is what I got: https://mega.nz/#!lsIkVYhD!knZr5DBmbvPyJh8ONeNx4pfb7Q0C9yuIp6FHiyJmhBw
It may however have a few bugs :D
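The core idea behind such a ConfigManager is small. A rough sketch, with one loud caveat: it only restores comment lines to the top of the file, and keeping comments attached to specific keys would need a real line-to-key mapping:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class CommentKeepingConfig {
    private final Path file;
    private final List<String> comments = new ArrayList<>();

    public CommentKeepingConfig(Path file) throws IOException {
        this.file = file;
        // Remember every comment line before the YAML parser sees the file.
        for (String line : Files.readAllLines(file)) {
            if (line.trim().startsWith("#")) {
                comments.add(line);
            }
        }
    }

    // yamlDump is e.g. what YamlConfiguration.saveToString() returned.
    public void save(String yamlDump) throws IOException {
        List<String> out = new ArrayList<>(comments);
        out.add(yamlDump);
        Files.write(file, out);
    }
}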
As stated, the YAML API provided with Bukkit strips comments when the file is written to. So you can comment your resource config file, but the comments are lost when it is saved or re-written by the API.
My preferred YAML handler is SimplixStorage. It handles comments well, both at the top and anywhere inside a YAML file. I also found SimpleConfig, which allows you to add a header and comments (which are not removed on file saves), as well as add to the config from your code.
I have a problem with my company's internal extension. They don't want to publish it, as it gathers data on an external server. So I need to host it myself, but I would like not to lose the ability to autoupdate.
As far as I have read, I need to use update_url in the manifest, but nothing more is said in the Opera documentation...
"update_url": "http://path/to/updateInfo.xml", (as stated on the documentation page)
OK... and what should I put in that XML? Will it autoupdate or just notify users about new updates? Where do I put the rest of the updated files?
I tried to contact Opera itself about this question, but they don't give any contact information except something like "if you have a problem, ask on Stack Overflow"... so here I am.
If it does not work, I was thinking about a really BAD method: using unsafe-eval and keeping the newest version in local storage... but I would rather avoid that.
In general the behavior is the same as for Chrome. You can base your setup on this document: https://developer.chrome.com/extensions/autoupdate
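Concretely, the file at update_url follows the Chrome autoupdate XML format described in that document. A sketch with placeholder values: appid is your extension's ID, codebase points at the packaged extension file you host yourself, and version is the version being offered. The browser polls this file periodically and installs the new package automatically when the offered version is higher than the installed one.

<?xml version='1.0' encoding='UTF-8'?>
<gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
  <app appid='YOUR_EXTENSION_ID'>
    <updatecheck codebase='https://yourserver.example/extension.crx' version='2.1' />
  </app>
</gupdate>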
Once I have my renamed files, I need to add them to my project's wiki page. This is a fairly repetitive manual task, so I guess I could script it, but I don't know where to start.
The process is:
Go to the appropriate page on the wiki
for each team member (DeveloperA, DeveloperB, DeveloperC)
{
for each of two files ('*_current.jpg', '*_lastweek.jpg')
{
Select 'Attach' link on page
Select the 'manage' link next to the file to be updated
Click 'Browse' button
Browse to the relevant file (which has the same name as the previous version)
Click 'Upload file' button
}
}
Not necessarily looking for the full solution as I'd like to give it a go myself.
Where to begin? What language could I use to do this and how difficult would it be?
Check if the wiki you mean to talk to supports XMLRPC, because if it does it should be a snap. I wrote a tool called WikiUp to solve a similar problem (updating a delineated section on a wiki page).
If you're writing in C#, the WebClient classes might be a good place to start. I bet people could give more specific advice if you mentioned which wiki platform you are using, and whether it requires authentication, though.
I'd probably start by downloading Fiddler and watching the HTTP requests as you do it manually. Then you could use some simple scripts and regexes to build your HTTP requests for automating the process.
Of course, if you're wildly lucky, your wiki will have a backend simple enough that you can just write to its database directly. :)
You might find CoScripter useful -- it's a Firefox extension that allows you to automate tasks you perform on websites. I'm not certain how you'd integrate this with the list of files you're changing on your local system, but it can certainly handle the file uploading through a web form.
A better bet is probably using cURL or a similar HTTP library with your programming language of choice. If you're on *nix, you can use the cURL command-line program inside your shell script to get this done fairly easily. (As jsight said, you will need to analyze the actual forms you're using on the web page, using Fiddler or just looking at the form elements, and re-create the POST.)
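If you go the raw-HTTP route, the heart of the script is one multipart POST per file. A sketch in Java; the URL and form field name are placeholders you would re-create from the form captured in Fiddler, plus whatever authentication cookies your wiki needs:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class WikiAttachmentUpload {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: use the action URL of the wiki's attach form.
        String attachUrl = "https://wiki.example.com/attach";
        Path file = Path.of("DeveloperA_current.jpg");

        // Build the multipart/form-data body by hand.
        String boundary = "----WikiUploadBoundary";
        String head = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\""
                + file.getFileName() + "\"\r\n"
                + "Content-Type: image/jpeg\r\n\r\n";
        String tail = "\r\n--" + boundary + "--\r\n";
        byte[] body = concat(head.getBytes(), Files.readAllBytes(file), tail.getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(attachUrl))
                .header("Content-Type", "multipart/form-data; boundary=" + boundary)
                .POST(HttpRequest.BodyPublishers.ofByteArray(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }

    // Joins the multipart pieces into a single byte array.
    private static byte[] concat(byte[]... parts) {
        int length = 0;
        for (byte[] part : parts) length += part.length;
        byte[] out = new byte[length];
        int pos = 0;
        for (byte[] part : parts) {
            System.arraycopy(part, 0, out, pos, part.length);
            pos += part.length;
        }
        return out;
    }
}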