How can I set up Apache Mina to save uploaded files to a database?

I'm looking for an example or any documentation that would help me get files to go into the database instead of the file system.
I tried setting it up like so:
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setPort(config.getSftpPort());
sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(new File(config.getHostFile())));
sshd.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));
sshd.setPublickeyAuthenticator(new AuthorizedKeysAuthenticator(new File(config.getAuthKeysPath())));
sshd.setFileSystemFactory(new VirtualFileSystemFactory());
sshd.start();
log.info("SFTP server started");
This works, but it just saves files to the root directory. The documentation says there's a way to get it to save to a database instead, but no examples or docs on how to do this are provided. The documentation for this project is complete garbage. :(
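One approach is to leave the VirtualFileSystemFactory writing to a staging directory and copy each successfully closed upload into the database from an SftpEventListener registered on the SftpSubsystemFactory. The sketch below is only that, a sketch: it assumes Apache SSHD 2.x (the listener classes moved between releases, and the exact closed(...) signature may differ in your version), plus a JDBC DataSource named dataSource and an uploads(name, content) table, none of which come from the question.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.apache.sshd.server.session.ServerSession;
import org.apache.sshd.sftp.server.FileHandle;
import org.apache.sshd.sftp.server.Handle;
import org.apache.sshd.sftp.server.SftpEventListener;

// Copies every successfully closed upload from the staging directory into the database.
public class DbUploadListener implements SftpEventListener {
    private final DataSource dataSource; // assumed to be configured elsewhere

    public DbUploadListener(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void closed(ServerSession session, String remoteHandle,
                       Handle localHandle, Throwable thrown) throws IOException {
        if (thrown != null || !(localHandle instanceof FileHandle)) {
            return; // skip failed transfers and directory handles
        }
        Path file = localHandle.getFile();
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO uploads (name, content) VALUES (?, ?)")) {
            ps.setString(1, file.getFileName().toString());
            ps.setBytes(2, Files.readAllBytes(file));
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new IOException("Could not persist " + file, e);
        }
    }
}
Registering it replaces the bare factory from the snippet above:
SftpSubsystemFactory factory = new SftpSubsystemFactory();
factory.addSftpEventListener(new DbUploadListener(dataSource));
sshd.setSubsystemFactories(Collections.singletonList(factory));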

MarkLogic "Getting Started" app returns 404 on Mac

I'm running MarkLogic 8 (developer edition) on Mac OS 10.10.1.
I'm a beginner with ML, and I'm reading the "Getting Started" material in the online docs, in particular the section "Sample XQuery Application that Runs Directly Against an App Server."
I created the "TestServer" app server just fine, following the instructions. I then copied and pasted the text for the four XQuery files in the exercise: load.xqy, dump.xqy, etc.
My local copies of the four .xqy files are under ~/Library/MarkLogic/Apps/Test, per the instructions. Read and execute permissions are open along the entire filepath, down to the .xqy files themselves.
When I request http://localhost:8005/Test/load.xqy, as instructed, I get a 404 Not Found response.
lsof -i :8005 indicates that MarkLogic is indeed listening on port 8005.
I checked the TestServer configuration against the instructions, disabled and re-enabled TestServer, and stopped and restarted ML, always with the same result: 404 Not Found.
I haven't been able to find anything in either the ML mail archives or Stack Overflow to get me past this sticking point.
Any ideas or suggestions would be very much appreciated. Thank you!
This seems like a permission issue. Does it work when you run it as the admin user?
Have you checked to make sure the files are loaded into the modules database?
Also check that the permissions were set with the correct role for those files.
Check that the user running the app has the role you used when setting permissions on those files.
This worked for us:
In the TestServer configuration, instead of just putting Test in the root field, put Apps/Test/, which is the location of the four files (load.xqy, dump.xqy, update-form.xqy, and update-write.xqy) relative to the MarkLogic installation directory (in our case, on CentOS, this was /opt/MarkLogic/).
And then issued this command:
chmod +r *.xqy
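To verify the result, a listing along these lines should show the files as readable (assuming the /opt/MarkLogic layout above; adjust the path for the Mac install location from the question):
ls -ld /opt/MarkLogic/Apps/Test
ls -l /opt/MarkLogic/Apps/Test/*.xqy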
If you followed all the instructions correctly, just remove Test from the URL: if yours is "http://localhost:8005/Test/load.xqy", make it "http://localhost:8005/load.xqy".

can't open fossil repo over web

I've been struggling for a couple of days with this problem, but can't seem to fix it. I think I'm almost there... but... not quite :(
This is where I am at.
I'm on a headless debian server, running virtualmin / webmin for creating my domains / users etc. I don't know if this will mess things up, but I'm happy to modify the config files manually (via webmin or via ssh/vim).
I am attempting to run Fossil as a CGI service under Apache.
It's an internal site, named homeserver.net. I can reach the default pages just fine, and can add and create links etc. as I want to.
Please note that the solution to my problem is at the end of the question.
So the files are located on disk at the following path, which tallies with my Apache document root:
/home/homeserver/www
I would like to run Fossil to have both the internal site and, later on, any dev work that I practice on, in separate files. So I created a new directory for these repositories.
/home/homeserver/repos/web/site.fossil
/home/homeserver/repos/dev/ [no repository yet!]
Reading the instructions on the Fossil page, I have created a short CGI file called 'fos_repo.cgi' that reads as follows:
#!/usr/bin/fossil
directory: /home/homeserver/repos
notfound: http://www.homeserver.net/site404.htm
When I open the link to
www.homeserver.net/cgi-bin/fos_repo.cgi
I get redirected to the 404 page that I have written. So the script is clearly being read and working.
From reading the fossil pages I understand that I should be able to use the following link to open/access the repo.
www.homeserver.net/cgi-bin/repos/web/site
I'm not sure why this isn't working...
So far I have tried the following.
I opened the repository from the CLI and had the server run in the background:
fossil server site.fossil &
I thought maybe the file should have been inside the main repo directory, not inside a subdirectory, so I moved it... it now lives in
/home/homeserver/repos/site.fossil
I tried creating an alias to the file in apache
Alias /home/homeserver/repos/web/site.fossil /home/homeserver/www/repos
When I browse to
www.homeserver.net/repos/site
I get nothing, but going to
www.homeserver.net/repos/site.fossil
will attempt to download the file (which is a binary).
So I think I'm getting somewhere, but I'm not sure what I'm missing.
I've used fossil before, but I ran it as a local server, and started it up as and when I needed it.
I'm running it like this so that I can eventually push the site out to a live VPS (maybe even end up hosting the Fossil site on the VPS as well).
PS: I really liked Fossil when I used it before, and loved the whole integrated wiki and bug tracker, and the fact that I could simply copy the file to my external drive to do a backup. Personally I don't really want to change to something else, but if I have to....
thanks in advance.
David
Edit: trying other options.
So I thought I would try the single-repository method shown on the Fossil page, and adjusted my CGI script accordingly.
Now when I navigate to www.homeserver.net/cgi-bin/fos_repo.cgi I get the following message returned:
SQLITE_CANTOPEN: cannot open file at line 30276 of [f5b5a13f73]
SQLITE_CANTOPEN: os_unix.c:30276: (21) open(/home/homeserver/repos)
However, if I SSH to the server and start it manually with
fossil server site.fossil
I can get to the server with www.homeserver.net:8081
So I either have a problem with my SQLite usage in Apache, or something else is wrong. Please help.
Solution
So for reasons of simplicity I've decided that using a single cgi file for each repo is what I am going to go with.
My initial directory structure was as follows:
/home/homeserver/www
/home/homeserver/www/repos
/home/homeserver/www/repos/web # for web site development
/home/homeserver/www/repos/dev # for other development
I think part of my problem was that I was hoping that, with directory: pointing at my repos/ location, Fossil would find the site.fossil file (located in repos/web) and the dev.fossil file (located in repos/dev).
Obviously this didn't work.
The reason I wanted it to look like this was for separation of the information on my system.
For some reason I had decided that pointing Fossil at repos/ would give me a nice Fossil-style front page with links to my repositories automatically. However, after having used the directory: version and getting the following error message
Unable to find or open the project repository
I realised that I was still going to need to write my own front page linking to the repositories, and that my expectation was a little too much.
So I've decided to run with a single CGI file pointing to each repo that I need to make.
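For the record, a per-repository CGI file for this setup looks something like the following (the path is taken from the directory layout above; a single-repository CGI uses the repository: directive rather than directory:):
#!/usr/bin/fossil
repository: /home/homeserver/www/repos/web/site.fossil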
Instead of
www.homeserver.net/cgi-bin/repos/web/site
try
www.homeserver.net/cgi-bin/repos.cgi/index
Reading your (very long) question again, I suggest trying
www.homeserver.net/cgi-bin/fos_repo.cgi/index

How to check if a file exists on an FTP server in Objective-C

In this app, I am using an FTP server to download and upload images. Now, I want to check whether a file exists on the server; I have the name of the file to check for. Currently, I am using the FTPManager class to do all the uploading and downloading, and it's working fine. But I couldn't find any solution for checking whether a file exists on the server.
Please help me with this.
Assuming you're talking about this FTPManager class, it seems to me like you should be able to create a new FMServer instance with the path to the folder you want, then call the manager's -contentsOfServer: method to get an array of dictionaries containing file information at that path.
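Sketching that idea out (untested; the FMServer parameter and the kCFFTPResourceName listing key are assumptions based on FTPManager's use of CFFTPStream, so check them against the version you have):
#import "FTPManager.h"

// Returns YES if fileName appears in the listing of the directory the server points at.
- (BOOL)fileNamed:(NSString *)fileName existsOnServer:(FMServer *)server {
    FTPManager *manager = [[FTPManager alloc] init];
    NSArray *contents = [manager contentsOfServer:server]; // array of NSDictionary entries
    for (NSDictionary *entry in contents) {
        NSString *name = entry[(id)kCFFTPResourceName];
        if ([name isEqualToString:fileName]) {
            return YES;
        }
    }
    return NO;
}
FTPManager's calls block while they run, so you'll likely want to call this off the main thread.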

HSQLDB 2.2.9: understanding server.properties file

I am running HSQLDB 2.2.9 on Ubuntu Linux, but I am struggling to understand the server.properties file. With HSQLDB installed under /usr/local, I start the server with java org.hsqldb.server.Server from the directory where I put the server.properties file. Suppose server.properties is:
server.database.0=file:/usr/local/hsqldb-2.2.9/hsqldb/hibernate/hiberdb
server.dbname.0=hiberdb
Then I get a subdirectory hibernate with everything in it labeled
hiberdb.{log,script,properties,tmp}
with hiberdb.tmp an empty directory. So far so good.
However, I cannot understand HyperSQL's logic in the following cases:
Suppose server.properties is:
server.database.0=file:/usr/local/hsqldb-2.2.9/hsqldb/hibernate
server.dbname.0=hiberdb
then the hiberdb alias is ignored and I get files
hibernate.{log,properties,script,tmp}
in the same directory as the server.properties file (i.e. in the current directory).
or even the following:
server.database.0=file:/usr/local/hsqldb-2.2.9/hsqldb/hibernate/
server.dbname.0=hiberdb
then all I get is a hibernate subdirectory with no hiberdb.* files; instead I have files
hibernate/{.log,.properties,.script,.tmp}
(these are hidden Unix files, and again the alias property dbname is ignored).
The HSQLDB documentation has an example:
http://hsqldb.org/doc/2.0/guide/listeners-chapt.html#lsc_server_props
server.database.1=file:/opt/db/mydb
server.dbname.1=enrolments
Is this example outdated or wrong?
Thanks,
Jason Posit
The server.dbname.0=hiberdb is totally unrelated to the other line in the properties file.
This 'alias' is used when accessing the server from a client.
The example in the documentation is correct. The external database client is dealing with a database alias it knows as 'enrolments', and does not need to know where you store the files on your server.
Access to your 'hiberdb' database from a client is always via a URL such as jdbc:hsqldb:hsql://localhost/hiberdb, no matter where you put the files.
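To make that concrete, here is a minimal client sketch (assuming the HSQLDB jar is on the classpath and the server still uses the default SA account with an empty password):
import java.sql.Connection;
import java.sql.DriverManager;

public class HiberdbClient {
    public static void main(String[] args) throws Exception {
        // The URL ends with the alias from server.dbname.0 ('hiberdb'),
        // not with the file path from server.database.0.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hsqldb:hsql://localhost/hiberdb", "SA", "")) {
            System.out.println("Connected to " + conn.getMetaData().getURL());
        }
    }
}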

Joomla 1.5 Site Backup Strategy

I would like to make a complete backup of my whole Joomla 1.5 based site from time to time. How would this ideally be done? Are there any common pitfalls? Note that I only have FTP access to the hosting server. Is there a step-by-step tutorial somewhere? I am using the latest Joomgallery and Kunena 1.0.9 (Legacy mode).
Maybe there is a good way to automate this?
There are two parts of the backup you have to worry about: the database and the files.
The first part is the database. It can be backed up using something like phpMyAdmin. If you don't have this available on your server already, it's not too hard to upload and get it going yourself. From there, you can just Export the entire database to a gzip file.
The second part is the code and uploaded files. The code base shouldn't change too often, so you could probably just make one backup of this. There are a number of ways; the simplest is to just download the entire folder via FTP, though if you're on Linux, I'm sure someone will know a single command line to get all the changed files (rsync?); see the sketch below.
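A sketch of that rsync idea, with a hypothetical host and paths (rsync needs shell access on the server, and repeat runs only transfer files that changed):
rsync -avz user@yourhost:/path/to/joomla/ ~/backups/joomla-files/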
The database is the main thing you have to worry about though: everything else should be able to be rebuilt just by reinstalling.
I think this: http://www.joomlapack.net/ is what you need. I use it myself and it works like a charm, both for backups and for moving my Joomla installations from developer sites to the live site.
Get an FTP synchronisation tool and keep an up-to-date copy of your site locally. Then you could run the batch script
mysqldump -hhost -uuser -p%1 schema > C:\backup.sql
to create a backup of your mysql tables at various points in time.
Edit:
You would have to have MySQL Server installed on your local machine and the path to its bin directory in your PATH in order to run the mysqldump command without much hassle. -p%1 takes the password provided on the command line, as you wouldn't want to store passwords in your batch script.
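Put together, the batch script might look like this (host, user, and schema names are placeholders; invoke it as backup.bat yourpassword):
@echo off
REM backup.bat: dumps the Joomla database; the password comes in as the first argument
mysqldump -hyourhost -uyouruser -p%1 joomla_db > C:\backup.sql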
If you only have FTP access you are in a bit of a bind, because besides all the files you'll also have to back up the database. Without access to the database, a full backup won't do you any good.
Whatever backup strategy you choose, be sure it can handle UTF-8 correctly. Joomla 1.5 stores all content as UTF-8, even when the database charset is set to 'iso-8859-1', so if the backup solution detects the database charset, some characters like € or é will result in "strange" ¬ / é instead; not really what you want.
I absolutely endorse using JoomlaPack; it works great. The optional remote tools allow you to initiate the backup from a Windows desktop machine, which performs the backup and downloads it. The remote tool has a scheduler, and you can also set it off to back up and download a list of sites.
Joomlapack also provides a file "kickstart.php" which you copy to your empty server account along with the backup, which automates the restore procedure. You do have to create an empty database with PHPMyAdmin or similar, and you are given the opportunity to supply the database parameters (host, database, username, password) during the process.
One pitfall I did run into with this, though, is that some common components can have absolute URLs in their configuration, e.g. SOBI2 and VirtueMart. It's then just a matter of finding the appropriate configuration file, editing it, and re-uploading it.
Another problem was that one archive file (either ZIP or their JPA format) had a filename with a "?" character in it (from a Linux server), and this caused a bit of a problem when trying to install it locally on a Windows WAMP stack: the extraction of the ZIP file failed, and it stopped the process from completing cleanly.
I suggest using the automatic backup service at http://www.everlive.net.
Update:
OK, here is some more information. EverLive.net is a website where you can create a free account. Enter your website details and you are ready to take your backups with just one click. Restore is also possible in the same way.
Furthermore, you can use the automatic backup option to take automatic backups at defined intervals. You can also use the website health check service to be informed if your website becomes unavailable.