I already have hundreds of images on my server (Linux-based shared hosting). I want to rename these files in one batch operation. The current names are gibberish, and I want them renamed to "mysitecom_image_001.jpg", "mysitecom_image_002.jpg", etc.
Can this be done in cPanel or FTP?
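If the host also provides SSH access (many cPanel hosts expose a Terminal), a short shell loop along these lines could do the rename in one pass; the directory path below is an assumption, so test it first by putting echo in front of the mv.

    # Hedged sketch: sequentially rename every .jpg in the image directory.
    cd /home/youruser/public_html/images   # assumed location of the images
    i=1
    for f in *.jpg; do
        printf -v new 'mysitecom_image_%03d.jpg' "$i"
        mv -n -- "$f" "$new"               # -n: never overwrite an existing file
        i=$((i + 1))
    done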
During the initial migration to AWS CloudWatch logging I also want legacy log files to be synced. However, it seems that only the current active file (i.e. the one still being updated) is synced. The old files are ignored even though they match the file name format.
So is there an easy way to upload the legacy files?
Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
Short answer: you should be able to upload all the files by merging them, or by creating a new [logstream] section for each file.
Log files in /var/log are usually archived periodically, for instance by logrotate. If the current active file is named abcd.log, then after a few days files will be created automatically with names like abcd.log.1, abcd.log.2...
Depending on your exact system and configuration, they can also be compressed automatically (abcd.log.1.gz, abcd.log.2.gz, ...).
The CloudWatch Logs documentation defines the file configuration parameter as such:
file
Specifies log files that you want to push to CloudWatch Logs. File can point to a specific file or multiple files (using wildcards such as /var/log/system.log*). Only the latest file is pushed to CloudWatch Logs based on file modification time.
Note: using a glob path with a star (*) is therefore not sufficient to upload the historical files.
Assuming that you have already configured a glob path, you could use the touch command sequentially on each of the historical files to trigger their upload (see the sketch after this list). Problems:
you would need to guess when the CloudWatch agent has noticed each file before proceeding to the next
you would need to temporarily pause writes to the current active file
gzipped files are not supported, but you can decompress them manually
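A hedged sketch of that sequence, assuming the abcd.log example above and a rotation depth of 5; the sleep interval is a guess, and checking the CloudWatch console between steps would be safer than a fixed wait.

    # Touch the rotated files one at a time, oldest (highest suffix) first,
    # so the agent always sees exactly one newly modified "latest" file.
    for n in 5 4 3 2 1; do                 # assumed rotation depth
        touch "/var/log/abcd.log.$n"
        sleep 300                          # assumed wait for the agent to catch up
    done
    touch /var/log/abcd.log                # finish with the real active file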
Alternatively you could decompress and then aggregate all the historical files into a single merged file. In the context of the first example, you could run cat abcd.log.* > abcd.log.merged. This newly created file would be detected by the CloudWatch agent (it matches the glob pattern), which would consider it the active file. Problem: the previous active file could be updated simultaneously and take the lead before CloudWatch notices your merged file. If this is a concern, you could simply create a new [logstream] config section dedicated to the historical file.
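For illustration, assuming the abcd.log naming and gzip-compressed rotations, the merge could look like this (the rotation depth is assumed; list your actual files oldest first):

    cd /var/log
    gunzip abcd.log.*.gz                   # decompress the rotated archives first
    # concatenate oldest (highest suffix) to newest into one merged file
    cat abcd.log.5 abcd.log.4 abcd.log.3 abcd.log.2 abcd.log.1 > abcd.log.merged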
Alternatively, just decompress the historical files then create a new [logstream] config section for each.
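For reference, such sections could look roughly like the following in the agent configuration file (its location depends on how the agent was installed, e.g. /etc/awslogs/awslogs.conf); the section, group, stream and datetime_format values are placeholders to adapt to your setup.

    # placeholder section, group and stream names
    [abcd-legacy-1]
    file = /var/log/abcd.log.1
    log_group_name = my-app-logs
    log_stream_name = abcd-legacy-1
    datetime_format = %b %d %H:%M:%S

    [abcd-legacy-2]
    file = /var/log/abcd.log.2
    log_group_name = my-app-logs
    log_stream_name = abcd-legacy-2
    datetime_format = %b %d %H:%M:%S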
Please correct any bad assumptions that I made about your system.
Currently I have a bunch of local copies of dev/production websites. Each copy contains the "files" directory, which contains files uploaded by site users. Currently I use rsync to synchronize the directories contents from remote servers (via ssh).
There are some annoyances:
I have to run rsync manually each time I want fresh files (this could be automated of course, but as I have a lot of website copies, it's not a good idea).
The rsync execution takes some time.
Disc space on my laptop is running out.
I think all of this could be solved if there is some kind of a software that can work like a proxy:
When I list files, it requests the file list from the remote server and caches the results for some (configurable) time.
When I first time request file contents, it retrieves the remote file and saves it locally.
When I update a file, it only gets updated locally.
When I save a new file in the "files" directory, it should not be pushed to the remote server.
Of course, the logic of such software would have to be much more complex, but I hope my idea is clear: don't waste disk space, download files on demand, make no remote changes.
Is there any software that works like that?
Map a network drive with NFS or sshfs. Make local copies if you really need a file.
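For example, a minimal sshfs mount could look like this (host and paths are placeholders); files are then fetched from the remote server only when they are read.

    mkdir -p ~/sites/example/files
    sshfs user@example.com:/var/www/example/files ~/sites/example/files -o reconnect
    # ... browse and open files on demand ...
    fusermount -u ~/sites/example/files    # unmount when finished (umount on macOS)

Note that writes through an sshfs mount do go to the remote server, hence the advice to copy a file locally before editing it.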
I did not mention it in the question, but I needed this for work with Drupal. And now I have found a Drupal-only solution, the Stage File Proxy module.
It does exactly what I need: downloads files from a remote server only when they are requested.
We have to upload a lot of VirtualBox images which are between 1 GB and 6 GB.
So I would prefer to use FTP for the upload and then include the files in MediaWiki.
Is there a way to do this?
Currently I use a jailed ftp user who can upload to a folder and then use the UploadLocal extension to include the files.
But this works only for files smaller than around 1 GB. If we upload bigger files we get a timeout, and even after setting PHP's max_execution_time to 3000 s the import stops after about 60 s with a 504 Gateway Timeout (which is also the only thing appearing in the logs).
So is there a better way of doing this?
You can import files from the shell using maintenance/importImages.php. Alternatively, upload by URL by flipping $wgAllowCopyUploads, $wgAllowAsyncCopyUploads and friends (this requires that the job queue be run via cron jobs). Alternatively, decide whether you need to upload these files into MediaWiki at all, because just linking to them might suffice.
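A hedged sketch of the importImages.php route, assuming the files are already in the jailed FTP user's directory and that the relevant extensions have been added to $wgFileExtensions; the paths, extensions and comment below are placeholders.

    # Run from the MediaWiki installation root; this imports the files directly,
    # bypassing the web server and PHP upload/time limits entirely.
    cd /var/www/mediawiki
    php maintenance/importImages.php --extensions=ova,vdi,vmdk \
        --comment="Bulk import of VM images" /home/ftpuser/uploads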
Say I have a user, and that user has an XML file which, among other things, includes the relative (to the XML file) path to one or more images stored on their local machine. I want them to be able to upload this XML file to a web server, and automatically upload the images.
So my XML file might contain:
<tag>Images\img_20120905_015463548.jpg</tag>
and I want to upload both the XML file and img_20120905_015463548.jpg in one operation.
The problem is that, as best I can tell, I can't get a local web page to grab the images automatically using JS/jQuery, due to the pesky web browser security model that won't allow me to upload arbitrary files from the local computer, or even know the real path of the XML file. After bashing my head against a brick wall for a few hours, I've come up with two possible solutions:
Upload the XML file; the server strips out the image file addresses and asks the user to locate each one. While it would get the job done, it's ugly and error-prone.
Use a batch file (or similar) to copy the XML file and images to a public-facing web server that the user can access on the local network, and then supply the public address of the XML file to my web server. It can then grab the images off the local public server. Problem: my IT department are too competent to allow users file access to public-facing servers. :)
Is there any solution out there I might have missed, that allows the user to upload multiple files given filenames only specified as a relative path?
Thanks in advance. :)
If you are not restricted to a web-only solution, this would be achievable using a plugin or desktop application. For instance, a desktop .NET or Java WebStart application or a signed and therefore trusted Java applet would be able to access the local XML file and any associated image files, then upload them to the web server using a POST, web services or WebDAV.
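The common thread is that any out-of-browser client can read the XML, resolve the relative image paths against the XML file's own directory, and send everything in one POST. As a rough illustration of that flow (a script rather than the .NET/Java applications mentioned above), here is a hedged shell sketch using curl; the upload endpoint and form field names are assumptions.

    #!/usr/bin/env bash
    # Upload an XML file plus every image it references via relative paths.
    xml="$1"                                   # e.g. ./export/data.xml
    dir=$(dirname "$xml")
    args=(-F "xml=@$xml")
    # Pull the relative paths out of <tag>...</tag>, convert backslashes,
    # and attach each referenced image as an extra multipart field.
    while IFS= read -r rel; do
        args+=(-F "images[]=@$dir/${rel//\\//}")
    done < <(grep -oP '(?<=<tag>).*?(?=</tag>)' "$xml")
    curl "${args[@]}" https://example.com/upload   # placeholder endpoint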
I have an issue where occasionally I need to work at Starbucks.
When I upload a PHP file the connection is slow, so if a user tries to access the PHP file while I am uploading it they will of course be issued a fatal error.
This is very inconvenient for my busy websites. Is there a way that, when a file is uploaded, it can be uploaded to a temporary location and then moved by the server to the real location once the transfer is finished?
You can make WinSCP upload the file to a temporary file name and rename it automatically once the transfer completes.
In Preferences go to the Transfer > Endurance tab and select All Files in the Enable ... Transfer to temporary file name box.
For details refer to:
https://winscp.net/eng/docs/ui_pref_resume
Why don't you just upload the file to a temporary folder on the server and execute commands on the server to remove the old file and move the new file? It should move the file fast enough on the server to eliminate any hiccups the users would see unless their timing was just right.
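A hedged sketch of that approach over SSH (host and paths are placeholders); as long as the temporary name is on the same filesystem as the destination, the final mv is effectively atomic, so visitors see either the old file or the new one, never a half-written one.

    # Upload under a temporary name next to the destination, then swap it in.
    scp index.php user@example.com:/var/www/html/.index.php.uploading
    ssh user@example.com 'mv /var/www/html/.index.php.uploading /var/www/html/index.php'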