Accessing file system resources using the Apache Sling Resource API

I want to access the file system using the Sling Resource API. I can access JCR nodes, but I don't know how to access file system resources. How does the ResourceResolver resolve resource objects for the file system?

To access filesystem resources as Sling resources you need to install the org.apache.sling.fsresource bundle and create at least one OSGi configuration to activate it.
See the docs at
http://sling.apache.org/documentation/bundles/accessing-filesystem-resources-extensions-fsresource.html
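Once the bundle is installed and a configuration is in place, the ResourceResolver resolves the mounted folder like any other part of the resource tree. Here is a minimal sketch, assuming a hypothetical OSGi factory configuration for org.apache.sling.fsprovider.internal.FsResourceProvider that mounts the local folder /data/files at the resource path /fs-test:

    import java.io.InputStream;

    import org.apache.sling.api.resource.Resource;
    import org.apache.sling.api.resource.ResourceResolver;

    public class FsResourceExample {

        public void readFile(ResourceResolver resolver) throws Exception {
            // /fs-test is the hypothetical mount point from the OSGi configuration;
            // the resolver treats it exactly like a JCR-backed resource path.
            Resource file = resolver.getResource("/fs-test/readme.txt");
            if (file != null) {
                // File resources adapt to InputStream; folder resources expose
                // their entries as child resources.
                try (InputStream in = file.adaptTo(InputStream.class)) {
                    // ... consume the file contents ...
                }
            }
        }
    }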

Related

Mule - Copy the directory from HDFS

I need to copy a directory (/tmp/xxx_files/xxx/Output) with all the sub-folders and files it contains from HDFS (the Hadoop distributed file system). I'm using the HDFS connector, but it seems it does not support this.
It always fails with an error like:
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is not a file: /tmp/xxx_files/xxx/Output/
I don't see any option in the HDFS connector for copying the files/directories inside the specified path; it always expects individual file names.
Is it possible to copy a whole directory, with its sub-folders and files, from HDFS using the MuleSoft HDFS connector?
As the technical documentation of the HDFS connector on the official MuleSoft website states, the code is hosted at the GitHub site of the connector:
The Anypoint Connector for the Hadoop Distributed File System (HDFS) is used as a bi-directional gateway between applications. Its source is stored at the HDFS Connector GitHub site.
What it does not state is that there is also more detailed technical documentation available on the GitHub site.
There you can also find various examples of how to use the connector for basic file-system operations.
The links in the official MuleSoft documentation appear to be broken.
You can find the repository here:
https://github.com/mulesoft/mule-hadoop-connector
The operations are implemented in the HdfsOperations Java class (see also the FileSystemApiService class).
As you can see, the functionality you expect is not implemented; it is not supported out of the box.
You can't copy a whole directory, with its sub-folders and files, from HDFS using the HDFS connector without further effort.
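If you do need the recursive copy, one option is to drop down to the plain Hadoop FileSystem API from custom Java code invoked by your flow, rather than the connector. A minimal sketch, assuming a hypothetical namenode address and local target directory:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class HdfsDirectoryCopy {

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical namenode address; use your cluster's fs.defaultFS.
            conf.set("fs.defaultFS", "hdfs://namenode:8020");

            FileSystem hdfs = FileSystem.get(conf);
            FileSystem local = FileSystem.getLocal(conf);

            // FileUtil.copy walks the source directory recursively, copying
            // all sub-folders and files to the destination.
            FileUtil.copy(hdfs, new Path("/tmp/xxx_files/xxx/Output"),
                    local, new Path("/tmp/output-copy"),
                    false /* do not delete the source */, conf);
        }
    }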

Is it possible to solely use AWS S3 for nextcloud/opencloud drive

I'm trying to configure Nextcloud to use S3 as the sole storage location for all files, so that no files are held locally.
I gather this can be done, but only within a subdirectory. Would it be possible to do it at the root path? The External Storage configuration seems to require a folder to be entered, and / does not appear to be valid.

Apache file upload (resource PUT DELETE) without CGI/PHP

Is it possible to configure Apache to support CRUD operations on file resources? GET works out of the box; how can you make PUT and DELETE work?
I would need to upload a file via an HTML form and/or XMLHttpRequest Level 2.
No PHP. No CGI. Just plain Apache, by configuration.
Is this supported in other web servers? I'm trying to find a static REST interface for file resource management without CGI, PHP, connectors, reverse proxies, or an FTP server.

How to access system properties from a Tomcat app deployed on CloudBees?

I want to run a Tomcat app on CloudBees. This app accesses some private and confidential properties from the file system. How can I access a file system on CloudBees? Please note that it should be highly protected, e.g. with permissions 700 or similar.
The RUN#Cloud platform doesn't provide a persistent (or distributed) filesystem, so you can't use it as the canonical store for those files. Instead, use an external file store that matches your security requirements, and copy the files to the java.io.tmpdir directory as the application starts (or lazy-load them). Once the files are on RUN#Cloud there is no security issue: your server instance is fully isolated, and the files are deleted after the application is undeployed/passivated.
You can use Amazon S3 or a comparable service to store the files.
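For example, a minimal sketch of loading such a file from S3 into java.io.tmpdir at startup with the AWS SDK for Java; the bucket and key names are hypothetical:

    import java.io.File;

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.GetObjectRequest;

    public class SecretsLoader {

        // Downloads the protected properties file into the temp directory.
        public static File fetchSecrets() {
            // Credentials are picked up from the default provider chain.
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            File target = new File(System.getProperty("java.io.tmpdir"), "app.properties");
            // "my-bucket" and "config/app.properties" are hypothetical names.
            s3.getObject(new GetObjectRequest("my-bucket", "config/app.properties"), target);
            return target;
        }
    }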
Another option is to attach properties to the RUN#Cloud instance as configuration parameters and access them as system properties. See http://wiki.cloudbees.com/bin/view/RUN/Configuration+Parameters
If the data is modest in size, you could consider using properties. Using the CLI, you can set them with
bees config:set propertyName=value
You can then access the value as a system property (for example) in your application. The properties themselves are stored encrypted by CloudBees.
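Reading such a value from the application is then a one-liner; a minimal sketch (the fallback default is an assumption for illustration):

    public class ConfigExample {
        public static void main(String[] args) {
            // Reads the value set via: bees config:set propertyName=value
            // The second argument is a fallback used when the property is absent.
            String value = System.getProperty("propertyName", "default-value");
            System.out.println("propertyName = " + value);
        }
    }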
I've since moved to OpenShift, and I solved the problem there. Thank you for your answers.

Accessing a Local Media File to Play in FlowPlayer

I want to access a local media file, say an .mp4 file, and play it in FlowPlayer in the Firefox browser.
My application is based on JSF and RichFaces 3.3, with a JBoss server.
The problem: in my backing bean I have a file name, say test.mp4, and the file is present in the WEB-INF folder. FlowPlayer will access this file using:
http://IP/ContextPath/WEB-INF/test.mp4
But now suppose the file is on the D: drive of my system, and the local server is running on the same machine. I want to access the file on the D: drive and play it in FlowPlayer.
FlowPlayer always prepends http://IP/ to the file name, so it won't play the media file.
Is there any way to let FlowPlayer access a local file on the system?
I figured it can be done using Apache... but how?
The component accepts a URL that must be accessible from the client browser, so a URL like file:///C:/resources/foo.mp4 would not work. The resource file you are trying to reference must be accessible from a web context. That is not to say that you can't store the files on the D: drive of your machine, but you would need a web server like Apache to expose that folder location as a web context folder. It can be configured to do this, but I won't go into the details here; if you have trouble with that, you should post a question on the Server Fault Stack Exchange site.
One thing to keep in mind is that resources inside the WEB-INF folder of your web application are never served directly to clients. So if you were to place your MP4 file in your web app (I advise against it, those files are enormous), it would have to live under the web app root, e.g. at resources/foo.mp4 on disk, to be accessible from http://site:port/applicationcontext/resources/foo.mp4.
The best way I've found to set this up is with an Apache front end listening for web traffic on the public port; using the mod_jk module, Apache then forwards requests for http://site:port/applicationcontext/ to the application server over the AJP port. I like this setup because I can keep large static resources in the ROOT context of the web server, and the application server stays completely behind a firewall, inaccessible from the outside. The application server can only be reached through the Apache web server, which means increased security. For more information on this type of setup, see this guide on how to set up the Apache web server connector with Tomcat: http://tomcat.apache.org/connectors-doc/webserver_howto/apache.html
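As an alternative to fronting the application server with Apache, a small servlet inside the application itself can stream the file from the local disk through the web context. This is only a sketch of that alternative, not the setup described above; the servlet mapping and the D:/media/test.mp4 location are hypothetical, and it assumes the Servlet 3.0 API:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Streams a file from the local disk through the web context so FlowPlayer
    // can load it from http://host:port/applicationcontext/media/test.mp4.
    @WebServlet("/media/test.mp4")
    public class LocalMediaServlet extends HttpServlet {

        // Hypothetical location of the media file on the server's D: drive.
        private static final File MEDIA = new File("D:/media/test.mp4");

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("video/mp4");
            resp.setContentLength((int) MEDIA.length());
            try (InputStream in = new FileInputStream(MEDIA);
                 OutputStream out = resp.getOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            }
        }
    }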