I am using Guacamole to connect to remote devices, over RDP for Windows machines and SSH for Linux. Now I would like to enable SFTP support for the connections, so I enabled the 'Enable SFTP' option in the Guacamole connection settings.
The problem is that SFTP works for smaller files (<3KB), creates 0KB files for slightly larger files (3KB-150KB), and raises an internal error for larger files (>150KB). I found these thresholds by trial, transferring files of different sizes to the remote machine.
In the screenshot, it can be seen that 'attendance.py', a smaller file of 548 bytes, is successfully transferred to the tmp folder on the Linux machine, but the other two files are created as empty files. The pdf file I tried to move is close to 180KB, which raises an Internal Error. I checked whether this error depends on the file type, but the problem occurs for all file formats. I have the same problem when transferring files to a Windows machine configured with the RDP protocol on the same Guacamole server.
Can someone help me with this? Thanks in advance
Are you using a reverse proxy?
I had the same problem when using nginx as a reverse proxy. By default it does not allow request bodies larger than 1MB.
I changed that limit in nginx and now it works.
For nginx, look for: client_max_body_size
If you are not using nginx, I would take a look at your web server's config. Remember, you are serving Guacamole through some sort of web server, and such servers usually enforce an upload size limit.
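For example, a minimal sketch of the nginx change (the 1024m value is illustrative, not a recommendation; the directive can live at http, server, or location level in whichever block proxies Guacamole):

    # nginx site config: raise the request body limit for the Guacamole proxy.
    # The built-in default for client_max_body_size is 1m.
    client_max_body_size 1024m;

Reload nginx afterwards (for example with nginx -s reload) for the change to take effect.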
I utilize the Apache VFS library to access files on a remote server. Some files are symbolic links, and when we get the file size of these files, it comes back as 80 bytes. I need the actual size of the link target. Any ideas on how to accomplish this?
Using commons-vfs2 version 2.1.
OS is Linux/Unix.
You did not say which protocol/provider you are using. However, it most likely does not matter: as far as I know, none of them implement symlink chasing (besides the local provider). You only get the size the server reports for the actual directory entry.
VFS is a rather high-level abstraction; if you want to drive a protocol client more directly, using commons-net or httpclient or whatever protocol client you need gives you many more options.
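For example, if the remote server is reached over SFTP, a lower-level client such as JSch distinguishes lstat() (attributes of the link entry itself) from stat() (attributes of the target it points to). A minimal sketch, with hypothetical host, credentials, and path:

    import com.jcraft.jsch.ChannelSftp;
    import com.jcraft.jsch.JSch;
    import com.jcraft.jsch.Session;
    import com.jcraft.jsch.SftpATTRS;

    public class SymlinkSize {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for illustration only
            Session session = new JSch().getSession("user", "remote.example.com", 22);
            session.setPassword("secret");
            session.setConfig("StrictHostKeyChecking", "no"); // demo only
            session.connect();

            ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
            sftp.connect();

            String path = "/data/report.log";   // hypothetical symlink
            SftpATTRS link = sftp.lstat(path);  // size of the link entry itself (~80 bytes)
            SftpATTRS target = sftp.stat(path); // size of the file the link resolves to
            System.out.println("link: " + link.getSize()
                    + " bytes, target: " + target.getSize() + " bytes");

            sftp.disconnect();
            session.disconnect();
        }
    }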
I have an Apache server running on Ubuntu hosting some files available for download. The hosted files are on a mounted NAS drive.
I am finding that when I download large zip files (.zip, .7z) of 100MB+ via the web server, the transferred file is corrupted. The method I am using to check the files is an MD5 calculation. I am also finding that the file size correlates with the chance of corruption; the bigger the file, the higher the chance of corruption. The mount itself seems to be fine, because I transferred files from the NAS to the machine without any issues.
I also have IIS running on Windows hosting the same files. When I download the files via that web server, there is never any corruption. This makes me think that the network itself is fine.
I am downloading the files via Chrome.
I'm not sure what is wrong, but I am led to believe it has to do with some Apache configuration. How can I increase my file transfer reliability on Apache? Or is there another possible cause?
It was an Apache configuration issue.
Found the solution in this article
Adding EnableSendfile On to the apache2.conf file fixed the corruption issue with large zip files. Apache 2.4 has this directive off by default, while Apache 2.2 defaults it to on.
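For reference, the change is a single directive (a sketch; it can also be set per vhost or per directory):

    # apache2.conf: serve static files via the kernel's sendfile()
    # instead of read/write copies through user space.
    # Default is Off in Apache 2.4; it was On in 2.2.
    EnableSendfile On

Worth noting: the Apache documentation cautions that sendfile can itself misbehave on some network-mounted filesystems, so it is sensible to re-check MD5 sums after enabling it on a NAS-backed directory.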
I have an Angular2 app that I've been developing for a while now. Locally I run an Nginx server, but the deployment server uses Apache. To unify things I moved the deployment server to Nginx, but I am getting extremely slow results with Nginx.
Apache loads in ~5 seconds (1.1MB transferred)
Nginx loads in 16-20 seconds (5MB transferred)
These are both on the same server pointing to the exact same directory. The actual size of main.bundle.js is 4470365 bytes, so it seems Nginx is transferring the entire file.
How is Apache able to transfer only 737K?
You can check which features are enabled on each server by clicking on the exact file in the Network tab of the browser's developer tools (Inspect Element), then going to Headers and then Response Headers.
Check whether gzip compression is enabled on only one of the servers (look for Content-Encoding: gzip). That is the most likely reason for the smaller transfer size.
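If gzip does turn out to be the difference, enabling it in nginx looks roughly like this (a sketch with illustrative values; it goes in the http or server block):

    # Compress text-based assets such as JS bundles on the fly.
    gzip on;
    gzip_types application/javascript text/css application/json;
    gzip_min_length 1024;  # skip tiny responses where compression is not worth it
    gzip_comp_level 5;     # moderate CPU/size trade-off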
Our site is hosted on an Amazon EC2 server and files are stored on Amazon S3.
My problem is that when I upload files larger than 30MB, the upload fails. It does not display any error, but the connection is reset and the page displays this:
The connection was reset
The connection to the server was reset while the page was loading.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer's network connection.
If your computer or network is protected by a firewall or proxy, make sure
that Firefox is permitted to access the Web.
But if I upload files smaller than 30MB there is no issue or error. Please help me with this issue.
In my config file we are using
<httpRuntime executionTimeout="120000" maxRequestLength="1048576"
useFullyQualifiedRedirectUrl="false"/>
so it should allow files up to around 1GB (maxRequestLength is specified in KB, and 1048576 KB = 1GB).
We are using single-part uploading, and the Amazon documentation says that a single-part upload allows anywhere from 1 byte to 5GB, so what is the issue?
Please help me soon.
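One thing worth checking (an assumption, since the question does not say which IIS version is involved): on IIS 7 and later, maxRequestLength in httpRuntime is not the only limit. IIS enforces a separate request-filtering limit, maxAllowedContentLength, which defaults to 30,000,000 bytes (about 28.6MB) and would produce exactly the ~30MB cutoff described above. A sketch of raising it in web.config:

    <!-- maxAllowedContentLength is in bytes; 1073741824 = 1GB (illustrative) -->
    <system.webServer>
      <security>
        <requestFiltering>
          <requestLimits maxAllowedContentLength="1073741824" />
        </requestFiltering>
      </security>
    </system.webServer>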
I have a dedicated server and a file of about 4GB to upload to it. What is the fastest and safest way to upload that file to the server?
FTP may create issues if the connection is broken.
SFTP will have the same issue.
Is your own computer reachable from the internet via a public IP as well?
In that case you could set up a simple HTTP server (if you have Windows, just set up IIS) and then use a download manager on the dedicated server (which one depends on its OS) to pull the file over HTTP (it can use multiple streams for that), or do it over BitTorrent.
There are trackers, like http://openbittorrent.com/, that will let you keep the file on your computer and then use a torrent client on the dedicated server to fetch the file.
I'm not sure what OS your remote server is running, but I would use wget; it has a --continue option. From the man page:
--continue
Continue getting a partially-downloaded file. This is useful when
you want to finish up a download started by a previous instance of
Wget, or by another program. For instance:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current directory, Wget
will assume that it is the first portion of the remote file, and
will ask the server to continue the retrieval from an offset equal
to the length of the local file.
wget binaries are available for GNU/Linux, Windows, Mac OS X, and DOS:
http://wget.addictivecode.org/FrequentlyAskedQuestions?action=show&redirect=Faq#download
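In practice, combining --continue with automatic retries lets the transfer survive dropped connections. A sketch, with a placeholder URL standing in for wherever you host the file:

    # -c resumes from the current offset after an interruption;
    # -t 0 (--tries=0) retries indefinitely on transient failures.
    wget -c -t 0 http://your-home-ip/bigfile.bin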