Import and export files between D365 Business Central and a VM

I'm working on a project where I need to import .txt files into D365 Business Central from a repository on a remote virtual machine.
Is it possible to establish such a connection in both directions (export/import)?
I've done some research and I don't think it's possible with Power Automate or Logic Apps, so it would be great if you could help me find another solution that will actually work.

If you have BC on-premises and can reach this VM through your network, then you can open and load the file through the File Management codeunit.
If not, then you could use the on-premises data gateway.
https://learn.microsoft.com/en-us/data-integration/gateway/service-gateway-onprem
You can also store the file in online storage or make it available through an API, FTP, etc.
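As a minimal sketch of the online-storage route (assuming the azure-storage-blob Python package; the connection string, container name and folder below are placeholders), a small script on the VM could push the .txt files to Azure Blob Storage, from where BC can pick them up:

```python
# Minimal sketch: push .txt files from the VM's repository folder to Azure Blob
# Storage so Business Central can fetch them from there. Assumes the
# azure-storage-blob package; connection string, container and folder are
# placeholders.
from pathlib import Path
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "<storage-account-connection-string>"  # assumption: supplied via config
CONTAINER = "bc-incoming"                                  # hypothetical container name
SOURCE_DIR = Path(r"D:\repository\outbound")               # folder on the VM

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client(CONTAINER)

for txt_file in SOURCE_DIR.glob("*.txt"):
    # Upload each file; overwrite=True replaces an existing blob of the same name.
    with txt_file.open("rb") as data:
        container.upload_blob(name=txt_file.name, data=data, overwrite=True)
    print(f"uploaded {txt_file.name}")
```

The export direction would work the same way in reverse: BC writes the file to the container and the VM-side script downloads it with download_blob.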

Related

What are the steps to export a VM using the VMware vCenter 7 REST API?

I'm attempting to build some custom automation to handle the import/export of VMs to/from an on-prem VMware cluster.
So far I have authenticated against the REST API and can get a VM's info, but I cannot work out how to approach exporting the selected VM.
I believe I'll need to create a download session and iterate through its files, saving them to disk one by one whilst keeping the download session alive, but the documentation seems to skirt around the concept of exporting a VM and focuses predominantly on deploying.
Does anyone have an example / list of steps required to export a VM via the REST API?
As of 7.0U2, that functionality doesn't exist in the vSphere Automation (REST) API. Here are the top-level VM functions: link
If you're open to using the vSphere Web Services (SOAP) API, there's an exportVM function available: link
If you want to automate VM import/export, I recommend using OVF Tool / PowerCLI.
Here's a KB article with an example: https://kb.vmware.com/s/article/1038709
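To make the OVF Tool route concrete, here is a hedged sketch that drives ovftool from Python; the vCenter address, credentials, inventory path and output file are placeholders, and it assumes ovftool is installed and on PATH:

```python
# Rough sketch: export a VM to an OVA by shelling out to VMware OVF Tool.
# Assumes ovftool is installed and on PATH; all names below are placeholders.
import subprocess
from urllib.parse import quote

VCENTER = "vcenter.example.com"
USER = "administrator@vsphere.local"
PASSWORD = "secret"                 # assumption: better sourced from a vault
DATACENTER = "DC1"
VM_NAME = "my-vm"
OUTPUT = "/exports/my-vm.ova"

# vi:// locator pointing at the VM in the vCenter inventory
source = f"vi://{quote(USER)}:{quote(PASSWORD)}@{VCENTER}/{DATACENTER}/vm/{VM_NAME}"

subprocess.run(["ovftool", "--noSSLVerify", source, OUTPUT], check=True)
```

PowerCLI's Export-VApp cmdlet covers the same ground if you would rather stay in PowerShell.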

NFS setup for file transfer to SAP PI middleware?

I am trying to set up a new architecture for middleware using SAP PI/PO. The problem is determining the right mechanism for pulling files from other servers (Linux/Windows, etc.).
Broadly, two different approaches are under review: using a managed file transfer (MFT) tool like Dazel vs. using NFS mounts. With NFS mounts, all the boundary application machines will act as servers and the middleware machine will be the client. With MFT, an agent installed on the boundary servers will push files to the middleware. We are trying to determine the advantages and disadvantages of each approach.
NFS advantages:
Ease of development; no need for an additional managed file transfer tool
NFS disadvantages:
We are trying to understand if this approach creates any tight coupling between middleware and boundary applications
How easy will it be to maintain 50+ NFS mount points?
How does NFS behave in case any boundary machine goes down or hangs?
We want to build resilient middleware that is not impacted by an issue at one boundary application
My two cents on NFS, based on my non-administrator experience (I'm a developer / responsible for the PI system):
We had NFS mounts on AIX which were based on Samba.
Basis told us that Samba could expose additional security risks.
We had problems getting the users on Windows and AIX straight, resulting in non-working mounts (probably our own inability to manage the users correctly, nothing inherent to the system).
From an integration point of view, I haven't had problems with tight coupling. Could be that I was just a lucky sod, but normally PI would be polling the respective mounts. If a mount is erroneous at the time the polling happens, that's just one missed poll, which will be retried at the next poll interval (see the polling sketch at the end of this answer).
One feature an MFT will undoubtedly give you that NFS can't is an edge file platform where third parties can put files (SFTP, FTPS).
Bottom line would be:
You could manage with NFS when external-facing file services are not needed
You need some organisational set of rules to keep track of which users access which shares, etc.
You might want to look into the security aspects of enabling such mounts (if things like Samba are involved)
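To make the "one missed poll" point concrete, here is a rough Python sketch of a tolerant polling loop over an NFS mount; the mount path and interval are made up, and a PI/PO file sender channel does the equivalent internally:

```python
# Rough sketch of a tolerant file-polling loop over an NFS mount.
# Mount path and interval are placeholders; SAP PI/PO sender channels
# implement the same idea internally.
import time
from pathlib import Path

MOUNT = Path("/mnt/boundary_app01/outbound")   # hypothetical NFS mount
POLL_INTERVAL_SECONDS = 60

while True:
    try:
        files = sorted(MOUNT.glob("*.txt"))
    except OSError as exc:
        # Mount unavailable (boundary machine down, stale handle, ...):
        # skip this cycle and try again at the next interval.
        print(f"poll skipped, mount not reachable: {exc}")
        files = []
    for f in files:
        print(f"would pick up {f}")            # hand over to processing here
    time.sleep(POLL_INTERVAL_SECONDS)
```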

What are the advantages of the Isilon OneFS File System Access API over accessing the file system using SMB or NFS?

I want to create a utility that reads/writes files with permissions (ACLs) from/to an Isilon server. This utility will access the server either over LAN or VPN. My main concern is also performance, both for file/folder enumeration and for copying file data together with attributes/ACLs/timestamps.
As far as I know, you can access the file storage using SMB if the server is a Windows server, or NFS if the server is Unix/Linux.
So I want some basic information on the scenarios in which the OneFS APIs are better than accessing the files directly over NFS/SMB.
I'm an Isilon admin at a big company. TL;DR: it's just another way to access your files.
Most of my client systems access their files using SMB, and a smaller number use NFSv3. Generally NFS is best suited to Unix clients and SMB is best with Windows, but you can mount SMB on Linux and you can run an NFS client on Windows. Probably the biggest advantage of NFS/SMB is that they are commonly used protocols that most IT admins are familiar with.
API access would be the best approach if you are already writing custom software, or implementing a web framework that is geared toward REST API integration. In that case Isilon's API access might be the easiest choice.
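As a hedged illustration of the API route, the sketch below lists a directory and reads a file through the OneFS namespace (RAN) API on port 8080; the host, credentials and paths are placeholders, and the endpoint details should be verified against your cluster's API documentation since they can differ by OneFS version:

```python
# Hedged sketch: read Isilon/OneFS files over the REST namespace API instead of
# SMB/NFS. Host, credentials and paths are placeholders; verify the endpoints
# against the cluster's API documentation for your OneFS version.
import requests

CLUSTER = "https://isilon.example.com:8080"
AUTH = ("apiuser", "secret")    # assumption: HTTP basic auth is enabled

# List a directory under /ifs (a GET on a directory path returns a JSON listing).
listing = requests.get(f"{CLUSTER}/namespace/ifs/data/projects", auth=AUTH,
                       verify=False)   # verify=False only for self-signed lab certs
listing.raise_for_status()
print(listing.json())

# Read a file's contents (a GET on a file path returns the raw bytes).
resp = requests.get(f"{CLUSTER}/namespace/ifs/data/projects/report.txt",
                    auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.content[:100])
```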

Where to begin with managing web servers / business document file management

I've inherited a couple of web servers (one Linux, one Windows) with a few sites on them, nothing too essential. I'd like to test out setting up backups for the servers to both a local machine and a cloud server, and then also use the cloud server to access business documents, with the local machine as a backup for these business documents.
I'd like to be able to access all data wherever I am via an internet connection. I can imagine it running as follows:
My PC <--> Cloud server - access by desktop VPN or Web UI
My PC <--> Web Servers - via RDP, FTP, Web UI (control panels) or SSH
My PC <--> Local Back-up - via RDP, FTP, SSH or if I'm in the office, Local Network
Web servers --> Local Back-up - nightly via FTP or SSH
Cloud Server --> Local Back-up - nightly via FTP or SSH
Does that make sense? If so, what would everyone recommend for a cloud server and also how best to set up the back-up server?
I have a couple of spare PCs that could serve as local backup machines; would that work? I'm thinking they'd have to be online 24/7.
Any help or advice given or pointed to would be really appreciated. Trying to understand this stuff to improve my skill set.
Thanks for reading!
Personally I think you should explore using AWS S3. The better (S)FTP clients can all handle S3 (Cyberduck, Transmit, etc.), the API is friendly if you want to write a script, there is a great CLI suite that you could use in a cron job, and there are quite a few custom solutions to assist with the workflow you describe, s3tools being one of the better-known ones. The web UI is fairly decent as well.
Automating the entire lifecycle like you described would be a fairly simple process. Here's one process for Windows, another general tutorial, another for Windows, and a quick review of some other S3 tools.
I personally use a similar workflow with S3/Glacier that's fully automated, versions backups, and migrates them to Glacier after a certain timeframe for long-term archival.
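For the nightly "web servers --> backup" legs, a cron-driven script along these lines (using the boto3 SDK; the bucket name and paths are placeholders) is usually all it takes on the S3 side:

```python
# Minimal sketch of a nightly backup upload to S3 with boto3.
# Bucket name and paths are placeholders; credentials come from the usual AWS
# config (environment variables, ~/.aws/credentials or an instance role).
import datetime
import tarfile
import boto3

BUCKET = "my-backup-bucket"        # hypothetical bucket
SOURCE_DIR = "/var/www"
stamp = datetime.date.today().isoformat()
archive = f"/tmp/webserver-{stamp}.tar.gz"

# Pack the sites into a dated archive...
with tarfile.open(archive, "w:gz") as tar:
    tar.add(SOURCE_DIR, arcname="www")

# ...and push it to S3. A lifecycle rule on the bucket can then move older
# archives to Glacier for long-term archival.
boto3.client("s3").upload_file(archive, BUCKET, f"webservers/{stamp}.tar.gz")
```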

Amazon S3 WebDAV access

I would like to access my Amazon S3 buckets without third-party software, simply through the WebDAV functionality available in most operating systems. Is there a way to do that? It is important to me that no third-party software is required.
There are a number of ways to do this. I'm not sure about your situation, so here they are:
Option 1: Easiest: You can use a 3rd party "cloud gateway" provider, like http://storagemadeeasy.com/CloudDav/
Option 2: Set up your own "cloud gateway" server
Set up a dedicated server or virtual server to act as a gateway. Using Amazon's own EC2 would be a good choice.
Set up software that mounts S3 as a drive. Two I know of on Windows: (1) CloudBerry Drive http://www.cloudberrylab.com/ and (2) WebDrive (http://webdrive.com). For Linux, I have never done it, but you can try: https://github.com/s3fs-fuse/s3fs-fuse
Set up a WebDAV server like CrushFTP. (It comes to mind because it's stable and cheap and works on any OS.) Another option is IIS, but I personally find it harder to set up securely for WebDAV.
Set up a user in your WebDAV server (i.e., CrushFTP or IIS) with access to the mapped S3 drive.
Possible snag: Assuming you're using Windows, to start your services automatically and have this work, you may need to set up both services to use the same Windows user account (Services->(Your Service)->[right-click]Properties->Log On tab). This is because the S3 mapping software might not map the S3 drive for all Windows users. Alternatively, you can use FireDaemon if you get stuck on this step to start the programs as a service all under the same username.
Other notes: I have experience using WebDrive under pretty heavy loads, and it seems to work well. Under tons of pounding (I'm talking thousands of files per hour being added to a 5 TB WebDrive) it started to crash Windows. But I'm not sure if you are going that far with it. Also, if you're using EC2, you may not have that issue since it was likely caused by a huge transfer queue in memory and EC2 will have faster transit to S3 and keep the queue smaller.
I finally gave up on this idea and today I use Rclone (https://rclone.org) to synchronize my files between AWS S3 and different computers. Rclone has the ability to mount remote storage on a local computer, but I don't use this feature. I simply use the copy and sync commands.
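If it helps, this is roughly what that looks like scripted; the remote name and paths are placeholders, and it assumes an S3 remote has already been set up with `rclone config`:

```python
# Rough sketch: drive rclone from Python to copy/sync a local folder with an
# S3 bucket. Assumes rclone is installed and an "s3remote" remote has been
# configured with `rclone config`; all paths are placeholders.
import subprocess

LOCAL_DIR = "/home/me/documents"
REMOTE = "s3remote:my-bucket/documents"

# One-way copy up (never deletes anything on the remote)
subprocess.run(["rclone", "copy", LOCAL_DIR, REMOTE, "--progress"], check=True)

# Or make the remote an exact mirror of the local folder
subprocess.run(["rclone", "sync", LOCAL_DIR, REMOTE, "--progress"], check=True)
```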
S3 does not support WebDAV, so you're out of luck!
Also, S3 does not support hierarchical namespaces, so you can't directly map a filesystem onto it.
There is an example Java project here for putting a WebDAV server over Amazon S3: https://github.com/miltonio/milton-aws