How to let Dropbox treat symbolic link AS IT IS? [closed] - dropbox

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
If I create a symlink inside my Dropbox folder pointing to another file that is also inside the Dropbox folder (say, to maintain a certain directory structure), Dropbox will de-reference the symlink and treat it as a normal file instead of syncing it as a symlink. This can be very frustrating, since I don't want copies of the same file.
So my question is: is there a way to let Dropbox sync a symbolic link just AS IT IS?
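For concreteness, the setup looks something like this (paths are hypothetical):
# Both the link and its target live inside the Dropbox folder
ln -s ~/Dropbox/MyFolder/report.txt ~/Dropbox/Other/report.txt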

In the cloud, the directory structure is not the same as on your computer, so there is no way Dropbox will synchronize the symbolic link as it is.
On your computer, the link points to the absolute path of your original file (or folder). It looks like the following:
Original folder path: /Users/username/home/Documents/Dropbox/MyFolder/
Symbolic link: symlink -> /Users/username/home/Documents/Dropbox/MyFolder/
Since the cloud can't reproduce this directory structure, Dropbox de-references the link and uploads a full copy of your files.
Links are great in Dropbox when they point to something outside your Dropbox folder.
That way the outside files are copied to the cloud but won't be duplicated on your computer.
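For example (paths hypothetical):
# The target lives outside Dropbox; its contents get uploaded,
# but no second on-disk copy is created locally
ln -s ~/Documents/Projects ~/Dropbox/Projects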
UPDATE:
On the matter of relative symbolic links: I guess Dropbox can't sync them as they are because your Dropbox directory structure may differ from your colleagues'.
For example:
Your structure: Dropbox/Projects/
- coolfile.txt
- SharedDirectory/
Your friend structure: Dropbox/SharedDirectory/
Relative Symbolic link inside SharedDirectory: link -> ../coolfile.txt
The link will work in your structure but not in your friend's.
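For illustration, that relative link would be created like this:
# Run inside SharedDirectory; the link resolves only if the
# parent directory actually contains coolfile.txt
ln -s ../coolfile.txt link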
UPDATE2: links inside Dropbox are also being used to share content from within a shared folder with someone outside that group.

I've tried doing what you are trying to do. No luck. At the moment there is no way to get Dropbox to sync symlinks properly.
However, there is a feature request exactly about this. So cast your votes on the request and hope for the best.

Related

Unknown permission issues preventing WSL2 from accessing random Windows files/directories [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 1 year ago.
I'm having permission issues when accessing seemingly random directories/files on the Windows filesystem with WSL2/Ubuntu. Some directories are not accessible, and I get a 'permission denied' error when I try to access them or any of the files in them. However, I have no issues accessing them from Windows itself through Explorer or a non-admin PowerShell or command-line shell.
From the WSL side I am the owner of the files and directories and have the correct permissions, but I still cannot access them. I can, however, access these directories/files if I switch to root. I shouldn't have to, though, since the permissions on this directory are the same as those on other directories:
drwxr-xr-x me me
I've tried looking at the directory properties from the Windows side and making them more permissive ("Full control" for each group in the Properties > Security menu), with no success. I am the only user of this computer and the only groups that exist are:
Authenticated Users
SYSTEM
Administrators (${my-machine-name}\Administrators)
Users (${my-machine-name}\Users)
I can provide more info if needed.
Make sure that not only the directory that contains the files has rx for your WSL user, but also every directory above it. (Sorry, would have commented but I don't have enough rep yet.)
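One quick way to check every component of the path is namei; the mount point and path below are assumptions:
# Prints owner and permissions for each directory along the path;
# every component needs at least r-x for your user
namei -l /mnt/c/Users/me/Documents/somefile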
Try creating a /etc/wsl.conf with the following:
[automount]
options="metadata,uid=1000,gid=1000,umask=022"
After creating the file:
Exit your WSL session
wsl --terminate <distro> or wsl --shutdown
Then restart and test the file/directory permissions again.
The uid and gid probably already default to those values, since you mention that the files and directories on the NTFS drive show as owned by your user, so they can probably be left out.
The metadata option is important, as it allows WSL to map Linux permissions onto files and directories created in WSL on those NTFS drives. But again, this isn't really your problem here either.
The umask is hopefully the long-term answer to your problem, as it will map WSL/Linux rwxr-xr-x to directories created in Windows, and rw-r--r-- to files.
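As a rough sanity check after the restart (paths hypothetical):
ls -ld /mnt/c/Users/me/somedir   # expect drwxr-xr-x with umask=022
ls -l /mnt/c/Users/me/somedir    # files inside should show rw-r--r--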

scp not allowing file transfer except to home directory [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I need to automate a file transfer using scp, and I have created a new SSH key and sent the public key to the remote server I'll be sending files to (in ~/.ssh).
The problem is that it won't let me scp the file anywhere except the home directory. If I transfer it to the home directory, it works fine, but not anywhere else.
Is there something that needs to be done here? Thanks!
If you can scp the file to your home directory, then your key is working, so that is unlikely to be the issue.
The kinds of problems you might have would be:
You don't have permission to write to the destination directory
$ scp test.txt myserver:/root
scp /root/test.txt: Permission denied
In this case you need to get permission to write to the directory, or choose a different destination that you do have access to.
The destination directory doesn't exist
$ scp test.txt myserver:foo/bar/
scp foo/bar: No such file or directory
In this case, check that you're uploading to the correct path.
A destination like myserver:foo/bar/ (note: no / after the :) means a path relative to your home directory, so it might be /home/seumasmac/foo/bar/ in this case.
A destination like myserver:/var/www/ (note: there is a / after the :) is an absolute path. It means the directory /var/www/ on the server.
The error that you get when you try to upload should tell you which of the above is the problem in this case.
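Putting that together, here is a hedged sketch of both destination forms (hostname and paths assumed):
# Relative to the remote home directory; create it first if it's missing
ssh myserver 'mkdir -p foo/bar'
scp test.txt myserver:foo/bar/
# Absolute path; your user needs write permission on /var/www/
scp test.txt myserver:/var/www/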

Removal of the /var/www/icons alias from Apache config [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have a directory called /var/www/icons on my web server, which is also referenced as an alias in my Apache config as seen below:
Alias /icons/ "/var/www/icons/"
The directory contains a number of small PNGs and GIFs, which AFAIK are unused, along with a README file.
Am I safe to remove this alias from my Apache config by commenting it out? If not, what area of my application is its removal likely to affect?
There is very little documentation available on this directory, and I must admit I've never come across it until now.
Most of the icons are used for displaying file types in directory listings. If you do not use such listings, you can safely remove the alias and the files. I did so and do not miss them.
It is definitely safe to remove it. Other conf files could reference /icons (e.g. the autoindex module), but apart from some 'not found' errors nothing nasty should happen.
My advice: scan the access.log files to see if URLs rooted at /icons are being accessed. Then delete the alias and monitor the error.log file for 404 errors.
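A minimal sketch of that check (log locations vary by distribution):
# Count requests under /icons/ in the existing access log
grep -c '"GET /icons/' /var/log/apache2/access.log
# After commenting out the Alias, watch for new failures
tail -f /var/log/apache2/error.log | grep -i icons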

Directory Listing in S3 Static Website

I have set up an S3 bucket to host static files.
When using the website endpoint (http://.s3-website-us-east-1.amazonaws.com/), it forces me to set an index file, and when the file isn't found, it throws an error instead of listing the directory contents.
When using the S3 endpoint (.s3.amazonaws.com), I get an XML listing of the files, but I need an HTML listing where users can click the link to each file.
I have tried setting the permissions of all files and the bucket itself to "List" for "Everyone" in the AWS Console, but still no luck.
I have also tried some of the JavaScript alternatives, but they either don't work under the website URL (which redirects to the index file) or just don't work at all. As a last resort, a collapsible JavaScript listing would be better than nothing, but I haven't found a good one.
Is this possible? If so, do I need to change permissions, ACL or something else?
I've created a simple bit of JS that creates a directory index in the HTML style you are looking for: https://github.com/rgrp/s3-bucket-listing
The README has specific instructions for handling Amazon S3 "website" buckets: https://github.com/rgrp/s3-bucket-listing#website-buckets
You can see a live example of the script in action on this s3 bucket (in website mode): http://data.openspending.org/
There is also this solution: https://github.com/caussourd/aws-s3-bucket-listing
It is similar to https://github.com/rgrp/s3-bucket-listing, but I couldn't make that one work with Internet Explorer. https://github.com/caussourd/aws-s3-bucket-listing works with IE and also adds the ability to order the files by name, size and date. On the downside, it doesn't follow folders: only the files at one level are displayed.
This might solve your problem. Security settings for the Everyone group (you need the bucketexplorer.com software for this):
If you are sharing files over HTTP, you may or may not want people to be able to list the contents of a bucket (folder). If you want the bucket contents to be listed when someone enters the bucket name (http://s3.amazonaws.com/bucket_name/), then edit the Access Control List and give the Everyone group the access level of Read (and do likewise with the contents of the bucket). If you don't want the bucket contents listable but do want to share the files within it, disable Read access for the Everyone group on the bucket itself, and then enable Read access for the individual files within the bucket.
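If you'd rather script it, the same ACLs can be set with the AWS CLI (bucket and key names here are hypothetical):
# Allow everyone to list the bucket
aws s3api put-bucket-acl --bucket my-bucket --acl public-read
# Allow everyone to read an individual object
aws s3api put-object-acl --bucket my-bucket --key docs/file.txt --acl public-read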
I created a much simpler solution. Just place the index.html file in the root of your folder and it will do the job. No configuration required: https://github.com/prabhatsharma/s3-directorylisting
I had a similar problem and created a JavaScript-and-iframe solution that works pretty well for listing directories on S3 static websites. You just have to drop a couple of .html files into the directory you want to list. You can find it here:
https://github.com/adam-p/s3-file-list-page
I found s3browser, which allowed me to set up a directory on the main web site that allowed browsing of the s3 bucket. It worked very well and was very easy to set up.
Another approach is based on pure JavaScript and the AWS SDK for JavaScript API. It needs no PHP or other engine, just a pure static website (Apache or even IIS):
https://github.com/juvs/s3-bucket-browser
It is not intended to be deployed on your own bucket (to me, that made no sense).
Using the new IAM users from AWS, you can provide more specific and secure access to your buckets. There is no need to publish your bucket as a website and make it all public.
If you want to secure the access, you can use conventional methods to authenticate users on your current website.
Hope this helps too!

How to enable mbstring from php.ini? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I'm having real difficulties enabling the mbstring extension on my localhost.
I'm using XAMPP 1.7.4 for Windows, which has PHP 5.3.5, and I tried to edit my php.ini file according to the documentation and various other examples I found online. After about 6 hours of this, all I managed to do was get an 'Error 500 - Server error' message that didn't go away even after I rolled back all changes to the .ini file.
What I need to do is create PDF invoices with Danish characters, using tFPDF to support UTF-8 encoding.
If anybody here has tips, suggestions, or an example of a working php.ini setup, please help out, 'cause I'm starting to lose my hair over this one! :|
Thanks a lot!
All XAMPP packages come with the Multibyte String (php_mbstring.dll) extension installed.
If you have accidentally removed the DLL file from the php/ext folder, just add it back (get a copy from the XAMPP zip archive, which is downloadable).
If you have deleted the accompanying INI configuration line from the php.ini file, add it back as well:
extension=php_mbstring.dll
Also, be sure to restart your webserver (Apache) using the XAMPP control panel.
Additional Info on Enabling PHP Extensions
install the extension (e.g. put php_mbstring.dll into the /XAMPP/php/ext directory)
in php.ini, ensure the extension directory is specified (e.g. extension_dir = "ext")
ensure the correct build of the DLL file (e.g. a 32-bit thread-safe VC9 PHP only works with DLL files built using exactly the same tools and configuration: 32-bit thread-safe VC9)
ensure the PHP API versions match (if not, you will receive a related error once you restart the webserver)
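Once the webserver is back up, a quick way to confirm the extension actually loaded (run from the XAMPP php directory):
php -m | findstr /i mbstring
php -r "var_dump(extension_loaded('mbstring'));"
If mbstring appears in the first command's output (or the second prints bool(true)), the extension is active.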