I am using BackupRead and BackupWrite to implement file synchronization between two folders.
I can use them on files and folders without any issue. But when I want to use them on a reparse point, BackupWrite fails with an access denied error.
The original reparse point is retrieved without error with BackupRead. The buffer has 2 streams: one for the security data, and one for the reparse data. I can see in the latter the target of the reparse point.
The file I am trying to create does not exist and FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT is passed to CreateFile.
So, my question is: what is the right way to open the file so that BackupWrite succeeds in restoring a reparse point?
I finally found two issues:
If the reparse point is for a directory, the new reparse point must first be created as a directory.
We need to enable "SeRestorePrivilege" or "SeCreateSymbolicLinkPrivilege" for the current process (through OpenProcessToken, LookupPrivilegeValue and AdjustTokenPrivileges), even if administrative privileges have already been granted; a minimal sketch of this sequence is shown below.
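For reference, here is a minimal sketch of that privilege-enabling sequence. It uses Python's ctypes purely for illustration (the original code is presumably C/C++, where the same OpenProcessToken / LookupPrivilegeValue / AdjustTokenPrivileges calls apply), and error handling is kept minimal:

    import ctypes
    from ctypes import wintypes

    advapi32 = ctypes.WinDLL('advapi32', use_last_error=True)
    kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
    kernel32.GetCurrentProcess.restype = wintypes.HANDLE

    SE_PRIVILEGE_ENABLED = 0x00000002
    TOKEN_ADJUST_PRIVILEGES = 0x0020
    TOKEN_QUERY = 0x0008

    class LUID(ctypes.Structure):
        _fields_ = [("LowPart", wintypes.DWORD), ("HighPart", wintypes.LONG)]

    class LUID_AND_ATTRIBUTES(ctypes.Structure):
        _fields_ = [("Luid", LUID), ("Attributes", wintypes.DWORD)]

    class TOKEN_PRIVILEGES(ctypes.Structure):
        _fields_ = [("PrivilegeCount", wintypes.DWORD),
                    ("Privileges", LUID_AND_ATTRIBUTES * 1)]

    def enable_privilege(name):
        # Open the current process token with enough access to adjust privileges.
        token = wintypes.HANDLE()
        if not advapi32.OpenProcessToken(kernel32.GetCurrentProcess(),
                                         TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY,
                                         ctypes.byref(token)):
            raise ctypes.WinError(ctypes.get_last_error())
        # Look up the LUID for the named privilege (e.g. "SeRestorePrivilege").
        luid = LUID()
        if not advapi32.LookupPrivilegeValueW(None, name, ctypes.byref(luid)):
            raise ctypes.WinError(ctypes.get_last_error())
        # Enable the privilege on the token. Note that AdjustTokenPrivileges can
        # return success yet set ERROR_NOT_ALL_ASSIGNED if the account does not
        # actually hold the privilege; real code should check for that.
        tp = TOKEN_PRIVILEGES()
        tp.PrivilegeCount = 1
        tp.Privileges[0].Luid = luid
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED
        if not advapi32.AdjustTokenPrivileges(token, False, ctypes.byref(tp),
                                              0, None, None):
            raise ctypes.WinError(ctypes.get_last_error())
        kernel32.CloseHandle(token)

    enable_privilege("SeRestorePrivilege")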
I made sure every single thing is correct with the related file names and their paths, and yet I always get stuck when executing the !sudo command which should open and use a given file. It returns this error:
"Error: Cannot read file '/content/-p': No such file or directory"
As I have already said, the file does exist and is located in my Google Drive. I even copy-pasted its exact path to make sure I typed it correctly, and yet the issue is still there. Why, and how can I solve it?
Thanks in advance for any help.
Two scenarios in my experience:
(1) It's the first time running the lines of code after mounting your Google Drive, and for no apparent reason it runs for tens of minutes, produces no output, raises an error (e.g. "File does not exist"), and the session crashes; you repeat the exact same steps and then it works (see the remount sketch after this list).
(2) There are thousands (or more) of files within the folder that contains the file you're trying to read or write; when a Google Drive folder contains that many files (i.e. thousands or more), it may crash for that reason.
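If you suspect scenario (1), forcing a clean remount of Drive and checking that the path actually exists before opening it is a cheap sanity check. A minimal sketch (the path below is a placeholder, not the asker's actual file):

    import os
    from google.colab import drive

    # Remount Drive from scratch; force_remount avoids the "already mounted" shortcut.
    drive.mount('/content/drive', force_remount=True)

    path = '/content/drive/MyDrive/myfile.txt'  # placeholder: substitute the real path
    if os.path.exists(path):
        with open(path) as f:
            print(f.readline())
    else:
        print('Not found:', path)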
I have a Lambda that is configured to fire whenever a new CSV file is added to an S3 bucket. It parses the CSV file into its individual rows and puts them into an SQS queue to be processed further.
The problem is that even though the Lambda has the appropriate permissions (s3:GetObject for arn:aws:s3:::my-bucket-name/*), it always fails with an access denied error when trying to execute the GetObject call.
Any idea why this is happening?
The issue was that the file name as received by the Lambda was encoded incorrectly, causing the Lambda to look for a non-existent file.
AWS treats looking for a non-existent file the same as trying to access a restricted resource, which is why I was receiving the somewhat misleading Access Denied error.
To fix it, I changed the file naming scheme to a simpler one that is unaffected by the encoding.
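The other common fix, if renaming is not an option, is to URL-decode the key from the S3 event before calling GetObject, since S3 event notifications deliver the object key URL-encoded. A sketch in Python (the handler shape assumes the standard S3 put-event payload; the SQS part is left out):

    import urllib.parse
    import boto3

    s3 = boto3.client('s3')

    def handler(event, context):
        for record in event['Records']:
            bucket = record['s3']['bucket']['name']
            # The key arrives URL-encoded (spaces become '+', special chars become %XX).
            key = urllib.parse.unquote_plus(record['s3']['object']['key'])
            obj = s3.get_object(Bucket=bucket, Key=key)
            body = obj['Body'].read().decode('utf-8')
            # ...split body into CSV rows and send them to the SQS queue here...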
I have been given a program that uploads PDF files to an FTP server, which is something I have never done. I've been asked what the behavior is when attempting to upload a duplicate filename. The program apparently doesn't check for duplicate filenames itself, and the command that uploads the file is My.Computer.Network.UploadFile. I can't find anywhere what happens when uploading a duplicate file: does it throw an exception or overwrite the file?
It looks like My.Computer.Network.UploadFile is a wrapper around WebClient.UploadFile, and the documentation for that states:
This method uses the STOR command to upload an FTP resource.
In FTP RFC 959 it says (the relevant part is the handling of an existing file):
STORE (STOR)
This command causes the server-DTP to accept the data
transferred via the data connection and to store the data as
a file at the server site. If the file specified in the
pathname exists at the server site, then its contents shall
be replaced by the data being transferred. A new file is
created at the server site if the file specified in the
pathname does not already exist.
So, if everything follows the standard (and that part of RFC 959 hasn't been superseded; I didn't dig further!), then it should replace the existing file. However, a server may be configured to deny overwriting existing files, so the behavior is not guaranteed.
Of course, the best thing to do would be to try it out in your environment and see what it does.
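If you want a quick check, a small script against the same server settles it: upload the same name twice and see whether the second STOR overwrites or is refused. A sketch using Python's ftplib, which also issues STOR (host, credentials and file name are placeholders):

    import io
    from ftplib import FTP, error_perm

    with FTP('ftp.example.com') as ftp:              # placeholder host
        ftp.login('user', 'password')                # placeholder credentials
        ftp.storbinary('STOR duplicate-test.pdf', io.BytesIO(b'first version'))
        try:
            ftp.storbinary('STOR duplicate-test.pdf', io.BytesIO(b'second version'))
            print('Second upload accepted; the server replaced the file.')
        except error_perm as exc:
            print('Server refused the second upload:', exc)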
We need to refactor a code base. The catch is that this will be done by one person, and we would like to avoid having the rest of the development team sit idle while this work takes place.
We therefore tried the following scenario to see if it is possible to work in parallel.
1. Created file test.txt in directory first in developer A's workspace.
2. Promoted this file.
3. Updated developer B's workspace, thereby getting file test.txt.
4. In A's workspace, moved file test.txt to directory second.
5. Promoted this move.
6. In B's workspace, edited file test.txt while it still resides in directory first (no update is made, thereby emulating that work is done while the refactoring is taking place).
7. Tried to promote and got a message saying that file test.txt had been modified (correct, the file has been moved).
8. Tried to merge but got an error message saying that AccuRev can't merge since the file is missing in directory second (where it has been moved).
9. Tried to update B's workspace, but that is not allowed since there is a modified file that needs to be merged first.
We are now stuck in a catch 22 situation.
We did try to place a fake file in directory second, but it is not recognized since it does not belong to the workspace.
Has anyone out there tried something like this and gotten it to work?
It is of course possible to copy files, but if there is a better way we would be grateful to hear about it, or to learn whether this is a known bug or limitation in the tool.
We will also contact AccuRev support, but I thought that I might be able to get some useful tips from the community.
Currently we are using AccuRev client 5.5.0.
Thanks for any suggestions on how to make the tool support this operation.
Referring to your steps 6 & 7: in AccuRev 5.5, after a file is edited and has a (modified) status, you first have to keep it before you can promote.
At step 8 you could try doing the merge from the Browse Versions view of the file. That way you can select any node to merge with, including the one that has been moved.
Step 9: An AccuRev update will not run successfully if one of the files to be updated is (modified). This is by design. You can keep the file so it has (kept)(member) status, then run the update.
David Howland
After contacting AccuRev support, the answer is that the only option available is to copy the file to some temp directory, revert the changes, update the workspace, and copy the file into the new location in the workspace.
AccuRev will at least tell you which files you have to copy since they will be marked as modified.
I was able to experimentally verify David's remark on step 9 using AccuRev 5.5.
Let's assume that in the workspace of user A the file was moved and the move was promoted, while in the workspace of user B the file was modified and user B is about to promote his/her change.
Before the file is kept, it is not possible for user B either to merge or to update. But after keeping the modified file, the update is possible. The file is first marked as overlap, then the merge succeeds in the new location. Basically, this avoids creating a copy of the file, reverting it and restoring it in the new location after an update, which can be quite cumbersome, as AccuRev does not easily reveal where the file has been moved.
If user B promotes the modification before user A promotes the move, all goes smoothly, i.e. on update the moved file appears as overlap, but easily merges into the moved file in the new location.
Similar results are obtained when the two users have workspaces connected to different streams and the overlap occurs on a common parent stream. An error can occur only if the file is unkept (i.e. only if the move is promoted before the change); then a simple keep allows you to proceed as usual (update, merge, then promote).
Imagine there are 3 or more independent locations where a file can be modified. These locations communicate with each other through email or physical mail (direct flash drive transfer). Though there is big room for a flaw, i.e. making simultaneous edits to the file and screwing things up, this client won't change too much. He would rather call everyone to say that he is working on the last update, or tell the other guys that he is waiting for the third guy's last update. Anyway, at some point after several exchanges, due to one participant's unintentional error, THE LAST VERSION of the file eventually gets mixed up. From this point everyone searches for the last version BY LOOKING AT THE CONTENT of the file.
This client wants to have a central location (he actually has one: a folder on his PC) and let everybody (including himself) copy any new or supposedly new file to this location, but prevent the file's last version from being overwritten. From this location he has to be able to easily copy, send or open the file and work with it.
So, here is my concept (2 steps):
step 1: I made an add-in to the main application where this file is created or edited. This add-in prompts the user to give the file a version number with every save command invoked from the editing application. In fact, the file can be re-saved multiple times without being considered modified (file attributes such as creation and save times do not carry much meaning here). That said, the user can cancel my add-in and still have saved the file, without saving a new file version.
step 2: multiple solutions:
solution A: I'm thinking of having a folder/file watcher and preventing the last version of the file from being overwritten. As you know, FileSystemWatcher fires the change/delete etc. events AFTER THE FACT, so I have to copy the overwritten file back after the fact (with some tricks; see the sketch after this list).
solution B: have a database to store all versions of the files, and build some shell extension to extract/view files from the database. Move all copied/pasted files to the database (my program folder) and restore the latest file in the working folder after the watcher fires a change/delete event.
solution C: find built-in Windows tools (APIs etc.) to rely on heavily, with some programming.
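For what it's worth, here is a rough sketch of solution A's watch-and-restore idea, written in Python with the third-party watchdog package standing in for .NET's FileSystemWatcher. All paths are placeholders, and it illustrates the after-the-fact problem: the restore only happens once the overwrite has already occurred, and the handler must avoid reacting to its own restore.

    import filecmp
    import os
    import shutil
    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    WATCH_DIR = r'C:\central'                               # placeholder folder
    WATCHED_FILE = os.path.join(WATCH_DIR, 'document.doc')  # placeholder protected file
    LATEST_COPY = r'C:\central-latest\document.doc'         # placeholder private backup

    def restore_if_needed():
        # Put the saved "latest version" back, but only if the watched file is
        # missing or its content differs; this also keeps the handler from
        # reacting endlessly to the restore it just performed.
        if (not os.path.exists(WATCHED_FILE)
                or not filecmp.cmp(WATCHED_FILE, LATEST_COPY, shallow=False)):
            shutil.copy2(LATEST_COPY, WATCHED_FILE)

    class RestoreLatest(FileSystemEventHandler):
        def on_modified(self, event):
            if event.src_path == WATCHED_FILE:
                restore_if_needed()

        def on_deleted(self, event):
            if event.src_path == WATCHED_FILE:
                restore_if_needed()

    observer = Observer()
    observer.schedule(RestoreLatest(), path=WATCH_DIR, recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()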
Any ideas?
Thanks in advance.