I am building a new course-like web application. There will be plenty of images, video, and sound files.
I am wondering about possible strategies for static file management between app environments.
My current approach is to use a SQL database to store image URLs; the images themselves will be uploaded via an admin panel on the website and kept in blob-like storage (an AWS S3 bucket).
However, when making changes, this requires either uploading the image to each environment or creating a data migration dev -> staging -> prod in the deployment pipeline.
Am I missing something here? Even if I store files in a single place (single storage account) for all environments, I still need to migrate the database records when making changes to the course.
Should I just apply the changes in prod and create some basic migration data for dev/uat course testing?
To emphasize, files will only be uploaded by an admin, not by a user. For example, admin uploads the image via admin panel and the image will be automatically included in the course.
I am not sure what the appropriate way of doing this is so that changes can be managed and tested properly. If I allow changes to be made directly on prod without a migration, I run the risk of putting something invalid and untested into the course. On the other hand, I am not sure whether it's common to migrate SQL data between databases, and that approach will have its own pitfalls.
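For illustration, the kind of data migration I have in mind would export a course's asset records from one environment and import them into the next. Everything below (the table and column names, and SQLite as the engine) is made up for the sketch:

```python
import json
import sqlite3  # stand-in for whichever SQL engine each environment actually uses

# Hypothetical table: course_assets(course_id, s3_key, kind)

def export_assets(db_path, course_id, out_path):
    # Dump the asset records for one course to a JSON file.
    rows = sqlite3.connect(db_path).execute(
        "SELECT course_id, s3_key, kind FROM course_assets WHERE course_id = ?",
        (course_id,),
    ).fetchall()
    with open(out_path, "w") as f:
        json.dump(rows, f)

def import_assets(db_path, in_path):
    # Load the JSON file into the next environment's database.
    # Assumes a unique constraint on (course_id, s3_key) so re-imports don't duplicate rows.
    with open(in_path) as f:
        rows = json.load(f)
    conn = sqlite3.connect(db_path)
    conn.executemany(
        "INSERT OR REPLACE INTO course_assets (course_id, s3_key, kind) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()
```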
Related
Recently, I have been trying to deploy an interactive Google App Engine app that writes to a SQLite database. It works fine when running the app locally, but when running it on the server, I receive the error:
OperationalError: attempt to write a readonly database
I tried changing the permissions on my .db and .sql files, but no luck.
Any advice would be greatly appreciated.
You can try changing the permissions of the directory and checking that the .sqlite file exists and is writable.
But generally speaking, it is not a good idea to rely on disk data when working on App Engine, as disk storage is ephemeral (unless you are using persistent disks on the flexible environment), and even then it's better to use a cloud database solution.
App Engine has a read-only file system, i.e. deployed files cannot be modified. It does, however, have a /tmp/ folder for storing temporary files, as the name suggests. That folder is backed by RAM, so it's not a good idea if the database is huge.
On app startup you can copy your original database file to the /tmp/ folder and use it from there afterwards.
This works. However, all the changes in the database are lost when the app nodes scale to 0. Each node of the app has its own database copy and the data is not shared between the nodes. If you need the data to be shared between the app nodes, better use CloudSQL.
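A minimal sketch of that startup copy, assuming the bundled database ships as data.db next to the application code (file names are illustrative):

```python
import os
import shutil
import sqlite3

# Read-only copy deployed with the app (name is illustrative).
BUNDLED_DB = os.path.join(os.path.dirname(__file__), "data.db")
# /tmp is the only writable location on App Engine standard; it is backed by RAM.
WRITABLE_DB = "/tmp/data.db"

def get_connection():
    # Copy the bundled database into /tmp the first time it is needed.
    if not os.path.exists(WRITABLE_DB):
        shutil.copyfile(BUNDLED_DB, WRITABLE_DB)
    return sqlite3.connect(WRITABLE_DB)
```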
Currently I'm using shared storage (Azure File Storage) to store profile pictures, company logos, and some custom Python scripts uploaded by admins. My REST services are running in a Docker Swarm cluster where all the nodes have access to the shared location. I'm currently saving the files to that location, creating a URL for each file, and serving it as a static resource through my nginx reverse proxy/load balancer. Are there any drawbacks to this kind of design, and how can I make it better?
There are several ways to access, store, and manipulate files in Azure File storage using the REST API:
The Azure File service offers the following four resources: the storage account, shares, directories, and files. Shares provide a way to organize sets of files and also can be mounted as an SMB file share that is hosted in the cloud.
More info here
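As a rough illustration of programmatic access (not necessarily your exact setup), uploading a file to a share with the Python SDK looks something like this; the connection string, share name, and paths are placeholders:

```python
from azure.storage.fileshare import ShareFileClient

# Placeholders: use your storage account's connection string and your own share/file names.
file_client = ShareFileClient.from_connection_string(
    conn_str="<connection-string>",
    share_name="uploads",
    file_path="logos/company-a.png",
)

with open("company-a.png", "rb") as data:
    file_client.upload_file(data)  # creates or overwrites the file on the share
```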
When it comes to the design, it will depend on what kind of concerns your customers may have: slow connectivity, whether they will need these files permanently, etc.
I am developing a web application that needs to store uploaded files - images, pdfs, etc. I need this to be secure and to scale - I don't have a finite number of uploads to plan for. From my research, the best practice seems to be storing files in the private file system, storing paths and meta data in the database, and serving through an authenticated script.
My question is where should these files be stored?
I can't store them on the web servers because I have more than 1, would be worried about disk space, and don't want the performance hit from replication.
Should they be programmatically uploaded to a CDN? Should I spin up a file server/cluster to handle this?
Is there a standard way for securely storing/retrieving a large number of files for web applications?
"My question is where should these files be stored?"
I would suggest using a dedicated storage server or cloud service such as Amazon AWS. It is secure and completely scalable. That is how it is usually done these days.
"Should they be programmatically uploaded to a CDN?" - yes, along with a matching db entry of some sort for retrieval.
"Should I spin up a file server/cluster to handle this?" - you could. I would suggest one of the many cloud storage services though.
"Is there a standard way for securely storing/retrieving a large number of files for web applications?" Yes. Uploading files and info via web or mobile app (.php, rails, .net, etc) where the app uploads to storage area (not located in public directory) and then inserts file info into a database.
I've got a DotNetNuke system (v 5.6) that's hosting several different portals, and I'd like to move one of them to another hosting provider. What's the easiest way to do this?
Every web site I find that claims to explain how to move a DotNetNuke site essentially says "Copy the entire database over to the new system." That's great if you've only got one portal in the database, but I've got a dozen of them. I only want to move one portal, not all of them.
Exporting the site to a .template is another popular suggestion. This exports the structure of the site (all the tab definitions, for example), but it doesn't include any of the actual HTML content. As such, that's essentially worthless.
There must be a reasonable way to do this short of trying to strip one individual portal's data out of every single DNN table. Right?
When you export a site template, you can include the content of the site, as well (for the modules that support portability, which includes the standard HTML module). This is how the default site template has all of its content. When you do this, there will be a .template.resources file that you'll need, as well as the .template file.
The other option is to do a full backup and restore, and then remove the other sites once you've restored. If you have significant content in a module that doesn't support portability, I think this will be your best bet.
FYI, I did find a solution from someone over on the DotNetNuke forums.
Create a 2nd version of that install, then delete all the other portals. Move the install with the one portal. We've done this several times with installs with lots of portals and it works just fine. Yeah, there's still some noise left in the db, but it's a quick and effective way of doing things.
Edit: note that this will give you an install with 1 portal. You can't detach a portal from one install and reattach it to an existing install (well, you can, but basically you have to export the portal as a template and that isn't 100%).
This is the approach I took, and sure enough, it works.
In a nutshell:
Mirror the files for the web site to another server.
Mirror the DNN database to another server.
Log in as Host on the new setup and delete all the portals but the one you want to migrate.
Delete any module definitions that are not in use by the remaining portal.
Open up your favorite SQL tool and delete any entries in the Users and UserProfile tables that no longer have a matching row in the UserPortals table (see the sketch after this list). DNN does not remove these by default, which is frustrating.
Hop into Windows Explorer and delete all of the Portal folders you no longer need (i.e. /Portal/1, /Portal/2, etc.).
Back up the database using Enterprise Manager to create a .bak file.
Make a .zip of the entire DNN installation folder.
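The cleanup queries from the SQL step are along these lines. I'm assuming the three tables join on a UserID column (check your schema first), and running them through Python/pyodbc here is just for illustration; any SQL tool works:

```python
import pyodbc  # illustrative only; the same queries can be pasted into any SQL tool

# Placeholder connection string for the DNN database.
conn = pyodbc.connect("DRIVER={SQL Server};SERVER=.;DATABASE=DNN;Trusted_Connection=yes")
cur = conn.cursor()

# Assumes Users, UserProfile, and UserPortals all share a UserID column.
# Delete profile rows first in case of a foreign key to Users.
cur.execute("DELETE FROM UserProfile WHERE UserID NOT IN (SELECT UserID FROM UserPortals)")
cur.execute("DELETE FROM Users WHERE UserID NOT IN (SELECT UserID FROM UserPortals)")
conn.commit()
```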
You now have a .bak that contains the database and a .zip that contains the files. Send those off to the new hosting company, and you should be all set. Just make sure to update your web.config to set the connection string properly to point to the new database server at the new hosting company.
It's just that easy. ;)
I have tried looking for an answer to this but I think I am perhaps using the wrong terminology so I figure I will give this a shot.
I have a Rails app where a company can have an account with multiple users, each with various permissions, etc. Part of the system will be the ability to upload files, and I am looking at S3 for storage. What I want is the ability to ensure that users from Company A can only download the files associated with that company.
I get the impression I can't unless I restrict the downloads to my deployment servers' IP range (which will be Heroku) and then feed the files through a controller and a send_file() call. This would work, but then I am reading data from S3 to Heroku and then back to the user, versus directly from S3 to the user.
If I went with the send_file method, can I close off my S3 bucket to the outside world and have my Heroku app send the file directly?
A less secure idea I had was to create a unique slug for each file and store it under that name to prevent random guessing of file URLs, e.g. http://mys3server/W4YIU5YIU6YIBKKD.jpg. This would be quick and dirty but not 100% secure.
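For illustration, generating such an unguessable key is trivial; something along these lines (purely a sketch):

```python
import secrets

def unguessable_key(extension="jpg"):
    # 16 random bytes -> 32 hex characters; practically impossible to guess,
    # but anyone who obtains the URL can still fetch the file.
    return f"{secrets.token_hex(16)}.{extension}"

print(unguessable_key())  # e.g. '3f9c2a51d0b84e7f9a6c1d2e3f405162.jpg'
```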
Amazon S3 buckets support policies for granting or denying access based on different conditions. You could probably use those to protect your files from different user groups. Have a look at the policy documentation to get an idea of what is possible. After that you can switch over to the AWS policy generator to generate a valid policy depending on your needs.
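As a rough sketch of what such a policy can express (the bucket name and IP range below are placeholders), this one allows s3:GetObject only from a given source IP range, so anything else has to be proxied through your app:

```python
import json

# Placeholder bucket name and CIDR; adapt the statement to your own setup.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetFromAppServersOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-company-files/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

print(json.dumps(policy, indent=2))  # paste the output into the bucket policy editor
```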