What are the advantages of the Isilon OneFS File System Access API over accessing the file system using SMB or NFS?

I want to create a utility that reads and writes files, together with their permissions (ACLs), from/to an Isilon server. The utility will access the server over either LAN or VPN. My main concern is performance, both for file/folder enumeration and for copying file data along with attributes, ACLs, and timestamps.
As far as I know, you can access the file storage using SMB if the server is a Windows server, or NFS if the server runs Unix/Linux.
So I'd like some basic information on the scenarios in which the OneFS APIs are better than accessing the file system directly over NFS/SMB.

I'm an Isilon admin at a big company. TL;DR: it's just another way to access your files.
Most of my client systems access their files using SMB, and a smaller number use NFSv3. Generally NFS is best suited to Unix clients and SMB to Windows, but you can mount SMB on Linux and you can run an NFS client on Windows. Probably the biggest advantage of NFS/SMB is that they are commonly used protocols that most IT admins are familiar with.
API access would be the best approach if you are already writing custom software, or implementing a web framework geared toward REST API integration; in that case, Isilon's REST API is probably the easiest choice.
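To give a feel for the API route: below is a minimal sketch of enumerating a directory and fetching a file's ACL over HTTPS, assuming OneFS's RESTful Access to Namespace endpoints on port 8080. The cluster address, credentials, and paths are placeholders, and the exact query parameters should be verified against your cluster's API documentation.

```python
import requests

# Hypothetical cluster address and credentials.
CLUSTER = "https://isilon.example.com:8080"

session = requests.Session()
session.auth = ("apiuser", "secret")  # HTTP basic auth; session auth also works
session.verify = False                # clusters often run self-signed certs

# Enumerate a directory: one HTTPS round trip, no mount required.
resp = session.get(f"{CLUSTER}/namespace/ifs/data/projects",
                   params={"detail": "default"})
resp.raise_for_status()
for entry in resp.json().get("children", []):
    print(entry["name"])

# Fetch the ACL of a single file (passing a bare string to `params`
# appends it verbatim, producing "?acl").
acl = session.get(f"{CLUSTER}/namespace/ifs/data/projects/report.txt",
                  params="acl")
print(acl.json())
```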

Related

Import and export files between D365 Business central and VM

I'm working on a project where I need to import .txt files into D365 Business Central from a repository on a remote virtual machine.
Is it possible to establish such a connection in both directions (export/import)?
I've done some research and I think it's impossible to do with Power Automate or Logic Apps, so it would be great if you could help me find another solution that actually works.
If you have BC on-prem and you have access to this VM through your network, then you can open and load the file through the File Management codeunit.
If not then you could use the Azure On Premise Data Gateway.
https://learn.microsoft.com/en-us/data-integration/gateway/service-gateway-onprem
You can also store the file in online storage or make it available through an API, FTP, etc.
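As one concrete shape for that last option, here is a minimal sketch (Python standard library only; the host, credentials, and directories are hypothetical) of a job on the VM pushing its .txt files to an FTP location that BC, or an intermediate process, can pick up:

```python
from ftplib import FTP
from pathlib import Path

# Hypothetical FTP endpoint and local export directory.
HOST, USER, PASSWORD = "ftp.example.com", "bcuser", "secret"
OUTBOX = Path("/data/bc-exports")

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)
    ftp.cwd("inbound")
    for path in OUTBOX.glob("*.txt"):
        with path.open("rb") as fh:
            # STOR uploads the file under the same name on the server.
            ftp.storbinary(f"STOR {path.name}", fh)
```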

NFS setup for file transfer to SAP PI middleware?

I am trying to set up a new middleware architecture using SAP PI/PO. The problem is determining the right mechanism for pulling files from other servers (Linux/Windows, etc.).
Broadly, two different approaches are under review: using a managed file transfer (MFT) tool like Dazel, versus using NFS mounts. In the NFS approach, all the boundary application machines act as servers and the middleware machine is the client. In the MFT approach, an agent installed on the boundary servers pushes files to the middleware. We are trying to determine the advantages and disadvantages of each approach.
NFS advantages:
Ease of development; no need for an additional managed file transfer tool
NFS disadvantages:
We are trying to understand whether this approach creates tight coupling between the middleware and the boundary applications
How easy will it be to maintain 50+ NFS mount points?
How does NFS behave if a boundary machine goes down or hangs?
We want to develop resilient middleware that is not impacted by an issue at one boundary application
My 2c on NFS, based on my non-administrator experience (I'm a developer / PI system responsible).
We had NFS mounts on AIX which were based on Samba. Basis told us that Samba could expose additional security risks.
We had problems getting the users on Windows and AIX straight, resulting in non-working mounts (probably our own inability to manage the users correctly, nothing inherent to the system).
I (from an integration point of view) haven't had problems with tight coupling. Could be that I was just a lucky sod, but normally PI would be polling the respective mounts. If a mount is erroneous at the time the polling happens, that's just one missed poll, which will be retried at the next polling interval (see the sketch after the list below).
One feature an MFT will undoubtedly give you that NFS can't is an edge file platform where third parties can drop files (SFTP, FTPS).
Bottom line would be:
You could manage with NFS when external-facing file services are not needed
You need some organisational set of rules to keep track of which users use which shares, etc.
You might want to look into the security aspects of enabling such mounts (if things like Samba are involved)
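To make the polling behaviour described above concrete, here is a minimal sketch (the mount paths and inbox directory are hypothetical) of a poll loop in which a failed or unmounted share costs only one cycle rather than taking the whole middleware down:

```python
import os
import shutil
import time

MOUNTS = ["/mnt/app1", "/mnt/app2"]   # hypothetical boundary-app mounts
INBOX = "/middleware/inbox"           # hypothetical pickup directory

def poll_once(mount: str) -> None:
    """Move any files found on one mount into the middleware inbox."""
    for name in os.listdir(mount):
        src = os.path.join(mount, name)
        if os.path.isfile(src):
            shutil.move(src, os.path.join(INBOX, name))

while True:
    for mount in MOUNTS:
        try:
            poll_once(mount)
        except OSError as exc:
            # An unavailable mount only costs this poll; its files are
            # picked up on the next interval. Beware: a *hung* hard
            # mount can block os.listdir() indefinitely, so mount with
            # soft/timeout options if you rely on this behaviour.
            print(f"skipping {mount}: {exc}")
    time.sleep(60)
```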

Where to begin with managing web servers / business document file management

I've inherited a couple of web servers, one Linux and one Windows, with a few sites on them, nothing too essential. I'd like to try setting up backups of the servers to both a local machine and a cloud server, then use the cloud server to access business documents and the local machine as a backup for those documents.
I'd like to be able to access all data wherever I am via an internet connection. I imagine it running as follows:
My PC <--> Cloud server - access by desktop VPN or Web UI
My PC <--> Web Servers - via RDP, FTP, Web UI (control panels) or SSH
My PC <--> Local Back-up - via RDP, FTP, SSH or if I'm in the office, Local Network
Web servers --> Local Back-up - nightly via FTP or SSH
Cloud Server --> Local Back-up - nightly via FTP or SSH
Does that make sense? If so, what would everyone recommend for a cloud server and also how best to set up the back-up server?
I have a couple of spare PCs that could serve as local backup machines; would that work? I'm thinking they'd have to be online 24/7.
Any help or advice given or pointed to would be really appreciated. Trying to understand this stuff to improve my skill set.
Thanks for reading!
Personally I think you should explore using AWS's S3. The better (S)FTP clients can all handle S3 (Cyberduck, Transmit, etc.), the API is friendly if you want to write a script, there is a great CLI suite that you could use in a cron job, and there are quite a few custom solutions to assist with the workflow you describe, s3tools being one of the better-known ones. The web UI is fairly decent as well.
Automating the entire lifecycle like you described would be a fairly simple process; there are step-by-step walkthroughs for Windows, general tutorials, and reviews of other S3 tools.
I personally use a similar workflow with S3/Glacier that's fully automated, versions backups, and migrates them to Glacier after a certain timeframe for long-term archival.
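As a sketch of the scripted route (boto3 with a hypothetical bucket and backup directory; the age-based migration to Glacier is better expressed as a bucket lifecycle rule than as code):

```python
import boto3
from pathlib import Path

# Hypothetical bucket and local backup root.
BUCKET = "my-backup-bucket"
ROOT = Path("/var/backups/sites")

s3 = boto3.client("s3")

for path in ROOT.rglob("*"):
    if path.is_file():
        key = str(path.relative_to(ROOT))   # mirror local layout as object keys
        s3.upload_file(str(path), BUCKET, key)
        print(f"uploaded {key}")
```

Run nightly from cron (or Task Scheduler on Windows), this covers the "web servers to backup" legs of the workflow above.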

Amazon S3 WebDAV access

I would like to access my Amazon S3 buckets without third-party software, simply through the WebDAV functionality available in most operating systems. Is there a way to do that? It is important to me that no third-party software is required.
There are a number of ways to do this. I'm not sure about your situation, so here they are:
Option 1: Easiest: You can use a 3rd party "cloud gateway" provider, like http://storagemadeeasy.com/CloudDav/
Option 2: Set up your own "cloud gateway" server
Set up a dedicated server or virtual server to act as a gateway. Using Amazon's own EC2 would be a good choice.
Set up software that mounts S3 as a drive. Two I know of on Windows: (1) CloudBerry Drive http://www.cloudberrylab.com/ and (2) WebDrive (http://webdrive.com). For Linux, I have never done it, but you can try: https://github.com/s3fs-fuse/s3fs-fuse
Set up a WebDAV server like CrushFTP. (It comes to mind because it's stable, cheap, and works on any OS.) Another option is IIS, but I personally find it harder to set up securely for WebDAV.
Set up a user in your WebDAV server (i.e. CrushFTP or IIS) with access to the mapped S3 drive.
Possible snag: assuming you're using Windows, to start your services automatically and have this work, you may need to set both services to use the same Windows user account (Services -> (Your Service) -> right-click -> Properties -> Log On tab). This is because the S3 mapping software might not map the S3 drive for all Windows users. Alternatively, if you get stuck on this step, you can use FireDaemon to start the programs as services under the same username.
Other notes: I have experience using WebDrive under pretty heavy loads, and it seems to work well. Under tons of pounding (I'm talking thousands of files per hour being added to a 5 TB WebDrive) it started to crash Windows. But I'm not sure if you are going that far with it. Also, if you're using EC2, you may not have that issue since it was likely caused by a huge transfer queue in memory and EC2 will have faster transit to S3 and keep the queue smaller.
I finally gave up on this idea and today I use Rclone (https://rclone.org) to synchronize my files between AWS S3 and different computers. Rclone has the ability to mount remote storage on a local computer, but I don't use this feature. I simply use the copy and sync commands.
S3 does not support WebDAV, so you're out of luck!
Also, S3 does not support hierarchical namespaces, so you can't directly map a filesystem onto it.
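To illustrate the flat-namespace point: S3 keys are plain strings, and "folders" are only a prefix/delimiter convention that clients emulate. A minimal sketch with boto3 and a hypothetical bucket:

```python
import boto3

s3 = boto3.client("s3")

# Keys are flat; the "/" characters in them are just a naming convention.
# (list_objects_v2 returns at most 1,000 keys; use a paginator for real use.)
resp = s3.list_objects_v2(Bucket="example-bucket",
                          Prefix="photos/2024/",
                          Delimiter="/")

for obj in resp.get("Contents", []):        # objects directly under the prefix
    print("object:", obj["Key"])
for cp in resp.get("CommonPrefixes", []):   # emulated "subfolders"
    print("folder:", cp["Prefix"])
```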
There is an example java project here for putting a webdav server over Amazon S3 - https://github.com/miltonio/milton-aws

Why not directly connect to SQL servers from client? Why do we need application servers in client-server model?

Many applications use the following model:
Browsers or other clients interact with application servers.
Application servers (web servers or RPC servers) interact with data store servers (SQL servers or non-SQL storage).
For internet applications, they need application servers because the data servers must be kept doing only simple work for performance reasons. But I can't see why they need application servers on an intranet.
For example, could we develop an Adobe AIR application that connects directly to a PostgreSQL server? I imagine we could deploy a central PostgreSQL server with many stored procedures and strict permissions, and let the Adobe AIR application fetch (and modify) data only by invoking those stored procedures.
Why don't most applications choose the simpler solution?
In general, there is no reason why you couldn't get an independent application to talk to a PostgreSQL server directly. Some applications do this and it works fine.
I'm not familiar enough with Adobe AIR to say whether it's possible in this context. In principle, if you can get a PostgreSQL driver, or if you can write your own using TCP sockets (the PostgreSQL network protocol is documented in detail in the official documentation), you could certainly connect directly.
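Where a driver does exist, the direct approach from the question looks roughly like this; psycopg2 is my hypothetical driver choice here, and the connection details, restricted role, and function name are placeholders:

```python
import psycopg2

# Hypothetical central database, with a restricted role that may only
# execute stored procedures/functions, not run arbitrary SQL.
conn = psycopg2.connect(host="db.internal", dbname="app",
                        user="air_client", password="secret")

with conn, conn.cursor() as cur:
    # Invoke a server-side function instead of embedding SQL in the client.
    cur.execute("SELECT * FROM get_customer_orders(%s)", (42,))
    for row in cur.fetchall():
        print(row)

conn.close()
```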
This being said, having a form of application server between the end-client and the database server isn't purely for performance.
Web-based development allows the SQL queries to be controlled by the server. Instead of exposing complete SQL access, you expose the features that the client can use. If you need to tweak the queries later (bug, change of data structure, ...), you can do this rather centrally on your application server, without having the need to deploy a new version of the client to each user.
Of course, you can build some of this abstraction using server-side programming directly, but it isn't suitable for all applications. It may depend on what other features your application needs, for example whether it must use a library written in another language. You can use the procedural language bindings, but they're not always suitable: PL/Python is an "untrusted" language (which may cause security problems), and PL/Java needs an external add-on, for example.
In addition, not all applications are ultimately reserved for intranet usage nowadays. It often makes sense not to restrict yourself to intranet usage when you start designing an application.
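To make the "expose features, not SQL" point concrete, here is a minimal sketch of an application-server endpoint wrapping a single query; Flask and psycopg2 are hypothetical choices here, as are all the names. Changing the query later touches only this server, not the deployed clients:

```python
from flask import Flask, jsonify
import psycopg2

app = Flask(__name__)

def get_conn():
    # Connection details are placeholders.
    return psycopg2.connect(host="db.internal", dbname="app",
                            user="appserver", password="secret")

@app.route("/customers/<int:customer_id>/orders")
def customer_orders(customer_id):
    conn = get_conn()
    try:
        with conn.cursor() as cur:
            # The SQL lives on the server: tweak it centrally and every
            # client benefits without redeployment.
            cur.execute("SELECT id, total FROM orders WHERE customer_id = %s",
                        (customer_id,))
            rows = cur.fetchall()
    finally:
        conn.close()
    return jsonify([{"id": r[0], "total": float(r[1])} for r in rows])

if __name__ == "__main__":
    app.run()
```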
I initially started with a direct access design and quickly found it useful to move to an application server where I talked to the DB via web services. Reasons included:
Handling DB restart, local connection loss, client IP address change, etc is much easier when you're talking to the DB over a stateless protocol like HTTP. This is more of an issue for remote workers.
Transactions are clearly demarcated and isolated in server-side transactional methods (I used EJB3 and container managed transactions)
It's much easier to add new clients like a phone app as they can share more of the code and business logic. Stored procedures in the database are very useful, but can be limited and occasionally frustrating.
Some tools/languages don't have built-in tools for talking to PostgreSQL directly, but can easily talk to a RESTful web service with XML or JSON request/response format.
DB admin is easier if you're dealing only with a single application server connection pool
The main downside is of course the extra layer means extra work and extra maintenance.
You can, but...
Browser languages/libraries tend to have poor database support
What happens when someone wants to use this application remotely?
If you're not talking about browser-based applications, then that is exactly what many do. There are plenty of traditional installed client applications talking to a backend database either directly or via a wrapper (ODBC/JDBC).