Here are my two issues with CloudFront at the moment:
For some users, connections to my CloudFront CDN seem to be really slow; requests just hang. For others it's fine.
For some users, certain files (e.g. stylesheets) don't load at all. But as above, other users are fine.
I regularly update my CDN files and use a query string to tell the user's browser that a file has been updated. However, I use one query string across the whole website, so if I update one file, all files get an updated query string. Could that be the issue?
Has anyone else had issues like this before?
Thanks for your help!
What I have decided to do is remove the use of query strings and simply rename the files each time they change.
So for example I'd call a file:
style_23623276323.css
The "23623276323" part is the MD5 signature of the file.
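Just to illustrate the idea, here is a minimal sketch of generating such a name. It isn't from the original post: the file name is a placeholder and it uses the full 32-character hex digest rather than the shortened signature shown above.

```python
import hashlib
import os
import shutil

def fingerprinted_copy(src):
    """Copy src to a name embedding the MD5 of its contents, e.g. style_<md5>.css."""
    with open(src, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    stem, ext = os.path.splitext(src)
    dst = f"{stem}_{digest}{ext}"
    shutil.copyfile(src, dst)
    return dst

# e.g. fingerprinted_copy("style.css") -> "style_<32-hex-digit-md5>.css"
```

Because the name only changes when the contents change, browsers and CloudFront can safely cache each version for as long as you like.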
More details can be found in this article.
Hope that helps somebody.
I am using phpMyAdmin from the SiteGround cPanel.
Story: I had Cloudflare set up for a PHP platform, then realised it was causing issues, so I removed it. The issue I'm left with is that half of my site is still running off (https://www.example.com).
What I have done so far: In the config files of my script I have already set it to run over HTTPS alone.
What I want to achieve: I noticed that some fields in the database still use the www URL. I want to execute a command that will automatically find anything with my old domain (https://www.example.com) and replace it with (https://example.com). The affected fields are not all in a single column/table; they are all over the place, so a find & replace across the whole database should fix the issue.
I would appreciate any help. Since this is a database, I don't want to try out random suggestions from different websites. I was recommended to use this website for assistance (if possible).
Thank you in advance.
Probably the most straightforward and quickest way is to simply take a dump of the entire database, open the SQL dump file in a text editor, do a text replace from [old url] to [new url], and then import the dump file back into the database. This should work just fine and avoids the headache of uncertainty and risk that comes with running a write operation over all of the database's tables via some db query.
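If it helps, here is a minimal sketch of that text-replace step, assuming the dump has already been exported to a file (the file name and URLs are placeholders):

```python
# Hypothetical dump file, e.g. exported with mysqldump or phpMyAdmin's export feature.
old = "https://www.example.com"
new = "https://example.com"

with open("site_dump.sql", "r", encoding="utf-8") as f:
    sql = f.read()

with open("site_dump_fixed.sql", "w", encoding="utf-8") as f:
    f.write(sql.replace(old, new))
```

One caveat worth checking first: if the PHP platform stores serialized arrays in the database, a plain text replace can break them, because serialized data embeds string lengths and the old and new URLs differ in length.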
I have a simple S3 Load with all the correct information. There are no validation errors and the package executes without a problem. It's just that there is no data in the table. Any tips from someone who is knowledgeable about Matillion?
There are a number of reasons why Matillion might not appear to load any data in an S3 Load.
Firstly, I'd check that the pattern matches the file names in the S3 location, which is a regular expression match.
I believe that also includes the path, which you may have included in the location parameter, so it may be worth modifying your pattern to look something like .*\/FilePrefix.* or even just .*, and then selecting the actual file in the location parameter.
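To illustrate the point about the pattern, here is a small sketch (the key and prefix are hypothetical, and I'm assuming the pattern has to match the whole key, path included):

```python
import re

# Hypothetical S3 key, including the path portion of the location.
key = "exports/2019/FilePrefix_0001.csv"

print(bool(re.fullmatch(r"FilePrefix.*", key)))       # False - the path isn't accounted for
print(bool(re.fullmatch(r".*\/FilePrefix.*", key)))   # True
print(bool(re.fullmatch(r".*", key)))                 # True - matches everything
```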
Secondly, if the files were last modified more than 64 days ago, or they have already been loaded into the table previously, Snowflake won't load them by default, which you can get around by turning the Force Load parameter On.
By default, logrotate shifts the file name index on each rotation. I would like to keep the names of the old files: on each rotation, create the new file and delete the outdated one.
Reason: every time I rsync those files to another server, I have to download ALL the files instead of simply downloading the ONE newly created file and removing the ONE outdated file.
Thanks
This web site and its users simply s#ck! This web site is dedicated to newbie questions, which later get answered by another group of newbies who use a Google search to copy & paste a reply (with no clue what they are saying), or who post irrelevant clarification replies.
I uploaded a lot of files (about 5,800) to Amazon S3, which seemed to work perfectly well, but a few of them (about 30) had their filenames converted to lowercase.
The first time, I uploaded with Cyberduck. When I saw this problem, I deleted them all and re-uploaded with Transmit. Same result.
I see absolutely no pattern that would link the files that got their names changed, it seems very random.
Has anyone had this happen to them?
Any idea what could be going on?
Thank you!
Daniel
First, let me point out that Amazon S3 object URLs are case sensitive. When you uploaded a file with upper-case characters and accessed it with a URL in the same case, it worked. But after the objects were renamed to lower case, I suspect you are still trying the same old URL, so you may get an Access Denied/NoSuchKey error message.
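As a quick way to see this, here is a sketch using boto3 (the bucket and key names are placeholders); "Style.css" and "style.css" are two entirely different keys as far as S3 is concerned:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

for key in ("Style.css", "style.css"):
    try:
        s3.head_object(Bucket=bucket, Key=key)  # succeeds only if this exact key exists
        print(key, "exists")
    except ClientError:
        print(key, "not found / not accessible")
```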
Can you try Bucket Explorer to generate the URL for the Amazon S3 object and then try to access that file?
Disclosure: I work for Bucket Explorer.
When I upload to Amazon servers, I always use FileZilla and SFTP, and I have never had such a problem. I'd guess (and honestly, this is just a guess since I haven't used Cyberduck or Transmit) that the utilities you're using are doing the filename changing. Try it with FileZilla and see what the result is.
I'd like to list all files from a remote folder (let's say www.mysite.com/folder, and this folder is already configured through .htaccess for directory listing).
After listing, I'll need to copy the remote files to a local folder.
For listing/copying local files only, I was using NSFileManager, but that doesn't work for remote ones. I've been looking for some reference on this, but couldn't find anything so far...
While NSFileManager can in fact handle URLs, it's not going to download the Apache HTML page with the directory listing and parse it to do this... you'll have to do that yourself. This sounds like a strange thing to be doing, however, so you may want to explain the reasoning and we may be able to suggest better alternatives. WebDAV comes to mind.
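The question is about Objective-C, but just to sketch the shape of the "fetch the index page and parse it yourself" approach in a language-agnostic way (the URL is the placeholder from the question, and the regex is deliberately crude):

```python
import re
import urllib.request

base = "http://www.mysite.com/folder/"  # placeholder from the question

html = urllib.request.urlopen(base).read().decode("utf-8", errors="replace")

# Apache's auto-generated index is plain HTML, so the links have to be pulled
# out by hand; a real HTML parser would be more robust than this regex.
names = [m for m in re.findall(r'href="([^"]+)"', html)
         if not m.startswith("?") and not m.startswith("/") and m != "../"]

for name in names:
    urllib.request.urlretrieve(base + name, name)  # copy each remote file locally
```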
UPDATE: Based on your comment, why not put the resources in a .zip (or similar) file and download that? Then it's a single download and you can just extract it locally. Sounds like it would save a lot of headaches and would make it much easier to do things like checksum validations on the download(s).
Maybe it's not the best way, but instead of getting a directory listing, we're going to keep a list of the files that should be transferred (it could be a .txt or .xml).
For downloading and tracking multiple requests, we're going to use ASINetworkQueues (more details can be found at http://allseeing-i.com/ASIHTTPRequest).
Another good suggestion, given by d11wqt (thank you for your help), is to compress the files and make just one single request.
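For what it's worth, a language-agnostic sketch of that manifest-driven approach (ASINetworkQueues itself is Objective-C; the URL and file names here are placeholders) might look like:

```python
import concurrent.futures
import urllib.request

base = "http://www.mysite.com/folder/"  # placeholder from the question

# manifest.txt is the hypothetical "list of files that should be transferred",
# one file name per line.
with open("manifest.txt") as f:
    names = [line.strip() for line in f if line.strip()]

def fetch(name):
    urllib.request.urlretrieve(base + name, name)
    return name

# Download the listed files concurrently, roughly what a request queue provides.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    for done in pool.map(fetch, names):
        print("downloaded", done)
```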