In an Apache Index file listing, there is a description field (along with Name, Last Modified and Size). What should or could populate this column of data?
More information:
On an Apache web server, I can enable a module called "mod_autoindex" (documented as "Apache Module mod_autoindex").
When this setting is enabled, if I visit a folder in a browser, and that folder does not have an index.html file, Apache will display the files and folders in that folder. The interface is pretty basic, but provides useful information about the files on a server.
File/folder information is displayed in a table with 4 columns (presumably generated by Apache). These columns are: Name, Last Modified, Size and Description.
Name, Last Modified and Size are self-explanatory. The Description column, however, is always empty. I was curious what could or should show up here. I had a hard time finding documentation on it.
A colleague of mine here found what I needed.
The Description column in the Apache file listing (index) view is populated using the AddDescription directive documented here: http://httpd.apache.org/docs/current/mod/mod_autoindex.html#adddescription
Edit: I'll also add that this documentation on setting file index formatting and descriptions via the .htaccess file is really helpful too: https://perishablepress.com/better-default-directory-views-with-htaccess/
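To make that concrete, here is a minimal sketch of the kind of .htaccess (or httpd.conf) directives involved, assuming mod_autoindex is active and overrides are allowed for the directory; the file names and description strings are just examples:

```
# Minimal sketch for a directory's .htaccess (assumes mod_autoindex and AllowOverride Indexes)
Options +Indexes
IndexOptions FancyIndexing
AddDescription "Quarterly sales report (PDF)" report-q1.pdf
AddDescription "Server backup archives" *.tar.gz
```

With FancyIndexing on, the text given to AddDescription shows up in the Description column for matching files.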
Take a look at my website: https://wrcraig.com/ApacheDirectoryDescriptions. It goes beyond the default directory description, providing a spreadsheet to assist in creating detailed descriptions and exporting them in FancyIndex/AddDescription format for inclusion in .htaccess.
It also provides a menu-driven, BASH-scripted alternative that uses the FancyIndex descriptive data above (automatically adding A/V durations) to recursively populate index.html files while retaining the security features of .htaccess.
The site has examples of the input spreadsheet and both the FancyIndex output and the optional BASH scripted output.
I have not been able to find an answer to this for several weeks. Perhaps my experience in development is not so great :) One site uses the JCH Optimize plugin, and I noticed that after clearing the old memory (caches), the CSS and JS file links do not change, i.e. the names of these files stay the same. The problem is that the browser checks the file name, and if it has not changed, site visitors are shown the old version of the style file. The question itself: where in the plugin (in which code file) can I add a GET parameter like ?vers=1.1 so that the browser treats it as a new file and updates the information for users? I will be glad to hear any solutions. Thanks.
The names of the combined CSS and JavaScript files are keyed from the names of the individual files on the page. Clearing the cache will not cause the plugin to generate a different file name.
As of version 7.0.0, there is an option to generate a different cache key if you want to change the names of the combined files. Update your current version to get that capability.
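As a generic illustration of the cache-busting idea behind that option (this is not the plugin's actual code; the function below is purely hypothetical, shown in Python), the point is simply that the asset URL changes whenever the content changes, so browsers re-fetch it:

```python
# Generic cache-busting sketch (hypothetical; not JCH Optimize code).
# Derive a version token from the file's content so the URL changes
# whenever the combined CSS/JS content changes.
import hashlib
from pathlib import Path

def versioned_url(base_url: str, file_path: str) -> str:
    digest = hashlib.md5(Path(file_path).read_bytes()).hexdigest()[:8]
    return f"{base_url}?vers={digest}"

# Usage (paths are placeholders):
#   versioned_url("/media/combined.css", "/var/www/media/combined.css")
#   -> "/media/combined.css?vers=1a2b3c4d"
```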
I'm working on enhancing metadata in our SharePoint online (O365) environment. Since a portion of my user base is used to foldering (explorer style), I've started using default column values to automatically set values on any files added to that specific folder (we have content organized categorically by folder currently). An example is our HR documents library - we have separate folders for recruiting, payroll, personnel files, etc. that automatically categorize files added to that folder with the same categories (recruiting, payroll, personnel, etc.). This supports both "search" and "click" users and makes adoption WAY easier while getting important metadata.
I want to implement this in a larger, more dynamic fashion, so manually setting default column values on each folder is not going to be scalable.
How can I reference the top level folder within the library (or even the current folder) for each newly added file and populate the "category" field for that new file with that folder name? I can do some very basic C# or Java code copy/paste, but bonus points for non-coding solutions =)
This problem can be solved without coding. You can use a SharePoint Designer workflow to implement it.
Create a different view for each functional team, and then use view filters to show the documents.
When a file is uploaded, use the workflow to set the file's metadata. There are some known limitations: if you upload multiple files at the same time, the metadata may not be set reliably; and if you upload a folder, the workflow will not run for the folder itself, so the files inside it may not get the right metadata.
I was actually able to use MS Flow to accomplish this in a pretty simple and straightforward fashion without managing custom views per team. The concept at a high level was:
(Trigger) When a new document is created in a folder in the library
Get the link of the parent folder of the newly added document
Create a variable (or just code it out in the Flow step) to parse out the name of the parent folder from the parent folder link (should be all text to the right of the last "/")
Set the category field as the variable
I'm sure you could do the same in a SharePoint Designer workflow, but I prefer Flow because of its visual nature and because it is far easier to troubleshoot.
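Step 3 above is just "take everything after the last slash". A minimal sketch of that parsing logic, shown in Python purely for illustration (in Flow you would use the equivalent expression; the link below is hypothetical):

```python
# Hypothetical parent-folder link taken from the Flow step that gets the new document's folder
parent_link = "https://contoso.sharepoint.com/sites/HR/Shared Documents/Recruiting"

# "All text to the right of the last '/'" is the folder name, used as the category
category = parent_link.rstrip("/").rsplit("/", 1)[-1]
print(category)  # prints: Recruiting
```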
Sorry if this seems stupid but I wonder if it's possible to add a database entry after an ftp upload.
To be clearer: thanks to WinSCP, everything I put into several local folders is automatically sent to my server.
However, I would like to create a MySQL entry for each uploaded file, and once again, automatically. Is it possible to do that? How?
To give the full details of what I need to do, you can read the following.
I have several folders with pictures, and each folder is uploaded automatically.
Each of those folders belongs to one user, and the goal is to give each user an account and allow them to see and download those files through a web interface. Since one account = one folder, that's kinda easy.
And I think a simple .htaccess can secure things so one user can only see and download the files in his own directory, no?
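For example, each user's folder could carry its own .htaccess with basic authentication. This is only a rough sketch with assumed paths and usernames, not a drop-in configuration (it also assumes AllowOverride permits AuthConfig):

```
# Hypothetical /var/www/uploads/alice/.htaccess
AuthType Basic
AuthName "Private files"
AuthUserFile /etc/apache2/.htpasswd   # created with the htpasswd tool
Require user alice
```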
However, if I want them to be able to see what's new (i.e. something they haven't downloaded or simply haven't marked as read), I think I need a table to manage those files.
Something like id | file (string) | read (bool).
If you think this way of proceeding is bad, then I'm open to changing how things are done, but to be clear, uploading the files needs to work this way, not through any kind of web form.
Thanks for reading that, sorry for my english.
Your problem consists of three steps:
Folders/files are automatically uploaded to your server directory; as you say, this is already handled efficiently by WinSCP.
You need to update your database with all the files and folders present in your server directory.
You need to record whether or not each file has been read/downloaded by the user.
Since your first step is in place, we don't need anything there. For the second step, you should write a script and schedule it to run at a fixed interval using cron (on Linux/Unix; on Windows, use the Task Scheduler). The script is responsible for creating a list of the files present in the directory and inserting the information for any files that are not yet in your database.
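For example, a crontab entry along these lines would run such a script every five minutes (the interval and script path are placeholders):

```
# Hypothetical crontab entry: run the sync script every 5 minutes
*/5 * * * * /usr/bin/python3 /path/to/sync_uploads.py
```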
EDIT:
This edit describes how your script file should work. As I explained, the cron job simply helps you run your script at a fixed interval (which can be every minute, every hour, every day, and so on). Let's say your database table has the following columns:
fileid (VARCHAR(20))
filepath (VARCHAR(20))
status (BOOLEAN)
Your script file should do the following (a minimal sketch follows the note below):
Create a list of the file paths that exist in your server directory.
Run a SELECT query to build a list of the file paths that already exist in the database table.
Compare list 1 with list 2 and find the entries that don't exist in list 2 (this gives you the list of file paths that need to be inserted into the table).
Insert that list of file paths, and set their status to false (meaning the file has not been read/downloaded yet).
NOTE: Please keep in mind that I am not prescribing how your database table should look. It can be what you have proposed, or it can differ depending on your requirements.
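A minimal sketch of such a script in Python, assuming a table shaped like the one described above (named "files" here), a MySQL database reachable through the mysql-connector-python package, and placeholder credentials and paths:

```python
# Sketch: insert newly uploaded files into MySQL and mark them unread.
# Table name, credentials, and the upload directory are placeholders.
import os
import uuid

import mysql.connector  # pip install mysql-connector-python

UPLOAD_DIR = "/var/ftp/uploads"  # hypothetical upload root

def sync_uploads():
    conn = mysql.connector.connect(
        host="localhost", user="dbuser", password="dbpass", database="uploads"
    )
    cur = conn.cursor()

    # List 1: file paths currently on disk
    on_disk = {
        os.path.join(root, name)
        for root, _, names in os.walk(UPLOAD_DIR)
        for name in names
    }

    # List 2: file paths already recorded in the database
    cur.execute("SELECT filepath FROM files")
    in_db = {row[0] for row in cur.fetchall()}

    # Insert the paths on disk that are not yet in the database, status = unread (false)
    for path in sorted(on_disk - in_db):
        cur.execute(
            "INSERT INTO files (fileid, filepath, status) VALUES (%s, %s, %s)",
            (uuid.uuid4().hex[:20], path, False),
        )

    conn.commit()
    cur.close()
    conn.close()

if __name__ == "__main__":
    sync_uploads()
```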
For the third step, simply create each entry with an unread status in the second step; then, when the user clicks the file link in your application to view or download it, send a POST request to your server that updates the file's status to read.
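As a rough sketch of that third step (the framework, route, and credentials here are hypothetical; any server-side stack would work the same way), the handler just runs an UPDATE against the same table:

```python
# Hypothetical endpoint to mark a file as read, using Flask purely for illustration.
from flask import Flask, request
import mysql.connector

app = Flask(__name__)

@app.route("/mark-read", methods=["POST"])
def mark_read():
    file_id = request.form["fileid"]  # sent by the web interface when the link is clicked
    conn = mysql.connector.connect(
        host="localhost", user="dbuser", password="dbpass", database="uploads"
    )
    cur = conn.cursor()
    cur.execute("UPDATE files SET status = TRUE WHERE fileid = %s", (file_id,))
    conn.commit()
    cur.close()
    conn.close()
    return "", 204
```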
Let me know if this helps!
By default, logrotate shifts the numeric index in each file name on every rotation. I would like old files to keep their names: on each rotation, create a new file and delete the outdated one.
Reason: every time I rsync those files to another server, I have to download ALL the files instead of simply downloading the ONE newly created file and removing the ONE outdated file.
Thanks
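One commonly used approach, assuming GNU logrotate (this is not taken from the thread above), is the dateext directive: rotated files are named by date instead of having a numeric suffix shifted, so previously rotated files keep their names and rsync only has to transfer the newly created one. A sketch with a hypothetical log path:

```
# Hypothetical /etc/logrotate.d/myapp
/var/log/myapp/access.log {
    daily
    rotate 7
    dateext            # name rotated files by date instead of shifting .1, .2, ...
    missingok
    notifempty
    compress
    delaycompress
}
```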
This web site and its users simply s#ck! This web site is dedicated to newbie questions, which are later answered by another group of newbies who use a Google search to copy & paste a reply (with no clue what they are saying) or who reply with irrelevant clarification posts.
When developing sites with Dreamweaver, it creates an _mmServerScripts directory at the root of your site. We've been reading that this folder contains SQL statements that are vulnerable to attack. We would like to avoid this altogether, if possible. Is this folder even necessary? Can you do anything to tell Dreamweaver never to create these folders?
Thanks in advance.
When you're creating dynamic pages with Dreamweaver, it creates files in the _mmServerScripts folder. Those files are used to obtain information about your database, such as table names, table columns, and column types. This information is used within the built-in server behaviors (and possibly third-party extensions) in order to generate the appropriate code to insert into your page. One such server behavior would be the Recordset server behavior. This interface allows you to select a data source, table name, and column names to include in the recordset.
If you do not use any of the dynamic data functions within Dreamweaver, it probably does not create the _mmServerScripts folder, but if you use that functionality, it will create the folder. You can remove the files in this folder (which is hidden in the Dreamweaver Files panel by default; to show hidden files, open the panel options menu in the upper right of the panel and choose View -> Show Hidden Files) by selecting:
Site -> Advanced -> Remove Connection Scripts
I do not have very extensive knowledge of the contents of these files, so I can't comment knowledgeably about what vulnerabilities, if any, exist within them, but it would be good to include links to the discussions you've been reading.