How do you specify whether storage should be local or session when using VueUse? The docs don't seem to mention it: https://vueuse.org/core/usestorage/
The third argument to useStorage() specifies the storage to use. It uses window.localStorage by default.
To use session storage, pass window.sessionStorage as the storage argument:
useStorage(
  'my-storage-key',
  myInitialValue,
  window.sessionStorage, // 👈 pass sessionStorage here instead of the default localStorage
)
Related
I'm using the startCopy API from the azure-storage Java SDK version 8.6.5 to copy blobs between containers within the same storage account. As per the docs, it will copy a block blob's contents, properties, and metadata to a new block blob. Does this also mean the source and destination access tiers will match?
String copyJobId = cloudBlockBlob.startCopy(sourceBlob);
If the source blob access tier is ARCHIVE, I am getting the following exception -
com.microsoft.azure.storage.StorageException: This operation is not permitted on an archived blob.
at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:87) ~[azure-storage-8.6.5.jar:?]
at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:305) ~[azure-storage-8.6.5.jar:?]
at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:196) ~[azure-storage-8.6.5.jar:?]
at com.microsoft.azure.storage.blob.CloudBlob.startCopy(CloudBlob.java:791) ~[azure-storage-8.6.5.jar:?]
at com.microsoft.azure.storage.blob.CloudBlockBlob.startCopy(CloudBlockBlob.java:302) ~[azure-storage-8.6.5.jar:?]
at com.microsoft.azure.storage.blob.CloudBlockBlob.startCopy(CloudBlockBlob.java:180) ~[azure-storage-8.6.5.jar:?]
I used the startCopy API to copy all blobs in container02 (source) to container03 (destination). The blob with access tier ARCHIVE failed, and the test1.txt blob's access tier does not match the source.
I just want to confirm whether this is expected, or whether I'm not using the right API and need to set these properties explicitly if I want the source and destination to look the same.
Thanks in Advance!!!
1. Blob with access tier ARCHIVE failed
You cannot execute the startCopy operation when the access tier is ARCHIVE.
Please refer to this official documentation:
While a blob is in archive storage, the blob data is offline and can't be read, overwritten, or modified. To read or download a blob in archive, you must first rehydrate it to an online tier. You can't take snapshots of a blob in archive storage. However, the blob metadata remains online and available, allowing you to list the blob, its properties, metadata, and blob index tags. Setting or modifying the blob metadata while in archive is not allowed; however you may set and modify the blob index tags. For blobs in archive, the only valid operations are GetBlobProperties, GetBlobMetadata, SetBlobTags, GetBlobTags, FindBlobsByTags, ListBlobs, SetBlobTier, CopyBlob, and DeleteBlob.
2. test1.txt blob's access tier is not the same as in the source
The copied blob's access tier is not copied from the source; when no tier is specified on the copy, it falls back to the storage account's default access tier.
Solution:
You may need to rehydrate the files from archive storage to the hot or cool access tier first. Alternatively, you can use this overload and specify standardBlobTier and rehydratePriority:
public final String startCopy(
    final CloudBlockBlob sourceBlob,
    String contentMd5,
    boolean syncCopy,
    final StandardBlobTier standardBlobTier,
    RehydratePriority rehydratePriority,
    final AccessCondition sourceAccessCondition,
    final AccessCondition destinationAccessCondition,
    BlobRequestOptions options,
    OperationContext opContext)
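A minimal sketch of calling that overload, assuming the destination reference is a CloudBlockBlob named destinationBlob, the source is no longer in the ARCHIVE tier, and the enum types live in the com.microsoft.azure.storage.blob package; the nulls simply fall back to the SDK defaults:

import com.microsoft.azure.storage.blob.CloudBlockBlob;
import com.microsoft.azure.storage.blob.RehydratePriority;
import com.microsoft.azure.storage.blob.StandardBlobTier;

// Asynchronous copy that pins the destination tier explicitly instead of
// letting it fall back to the account's default access tier.
String copyJobId = destinationBlob.startCopy(
        sourceBlob,
        null,                        // contentMd5
        false,                       // syncCopy
        StandardBlobTier.HOT,        // desired destination tier
        RehydratePriority.STANDARD,  // only relevant when rehydrating
        null,                        // sourceAccessCondition
        null,                        // destinationAccessCondition
        null,                        // options (BlobRequestOptions)
        null);                       // opContext (OperationContext)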
I have an application that uses Azure Storage Tables that I would like to run in an Azure Container Instance. The Container Instance environment variables (my only option for passing configuration to the application running in the container) only allow alphanumeric and underscores in the quoted string values, and a connection string has things like semicolons and equals. I thought a Key Vault would work, but then I can't pass an application ID either. I can't pass:
Connection String
URL
AppID - UUID
base64 data
The only thing I can even think of would be to encode these strings to bytes (UTF-8) and convert the bytes to a hex string, but that's a messy workaround. What is the recommended means of passing configuration to an Azure Container Instance?
Update 11/6: We've updated the Azure portal to be more lenient on env var input so strings with special characters like connection strings should work now. Thanks!
This is currently a constraint of the Azure portal. Please try this deployment via az cli, which should support special characters in env var values.
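For example, a rough sketch of such a deployment (the resource group, container name, image, and variable values are placeholders; quoting the value keeps characters like ; and = intact):

az container create \
  --resource-group my-rg \
  --name my-aci-app \
  --image myregistry.azurecr.io/my-app:latest \
  --environment-variables APP_ID=11111111-2222-3333-4444-555555555555 \
  --secure-environment-variables STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>;EndpointSuffix=core.windows.net"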
I am trying to upload a file to S3 and set an expire date for it using Java SDK.
This is the code I have:
Instant expiration = Instant.now().plus(3, ChronoUnit.DAYS);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setExpirationTime(Date.from(expiration));
metadata.setHeader("Expires", Date.from(expiration));
s3Client.putObject(bucketName, keyName, new FileInputStream(file), metadata);
The object shows no expiration date in the S3 console.
What can I do?
Regards,
Ido
These are two unrelated things. The expiration time shown in the console is x-amz-expiration, which is populated by the system, by lifecycle policies. It is read-only.
x-amz-expiration
Amazon S3 will return this header if an Expiration action is configured for the object as part of the bucket's lifecycle configuration. The header value includes an "expiry-date" component and a URL-encoded "rule-id" component.
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectHEAD.html
Expires is a header which, when set on an object, is returned in the response when the object is downloaded.
Expires
The date and time at which the object is no longer able to be cached. For more information, go to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21.
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html
It isn't possible to tell S3 when to expire (delete) a specific object -- this is only done as part of bucket lifecycle policies, as described in the User Guide under Object Lifecycle Management.
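If what you were actually after is just the HTTP caching header, a minimal sketch with the v1 Java SDK, reusing the variables from the question (setHttpExpiresDate writes the Expires response header; it does not delete anything):

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Date;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

// Only affects the Expires header returned on GET; S3 keeps the object until a
// lifecycle rule or an explicit delete removes it.
ObjectMetadata metadata = new ObjectMetadata();
metadata.setHttpExpiresDate(Date.from(Instant.now().plus(3, ChronoUnit.DAYS)));
s3Client.putObject(new PutObjectRequest(bucketName, keyName, file).withMetadata(metadata));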
According to the documentation, the setExpirationTime() method is for internal use only and does not set the expiration time of the uploaded object:
public void setExpirationTime(Date expirationTime)
For internal use only. This will not set the object's expiration
time, and is only used to set the value in the object after receiving
the value in a response from S3.
So you can't directly set an expiration date for a particular object. To work around this you can:
Define a lifecycle rule for the whole bucket (expire all objects after a number of days)
Define a lifecycle rule at the bucket level that expires objects with a specific tag or prefix after a number of days
To define those rules, see the documentation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-set-lifecycle-configuration-intro.html
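For example, a minimal sketch with the v1 Java SDK; the rule id and the "tmp/" prefix are placeholders, and objects matching the filter are deleted roughly three days after creation:

import java.util.Arrays;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

// Bucket-level rule: expire (delete) everything under the tmp/ prefix after 3 days.
BucketLifecycleConfiguration.Rule expireRule = new BucketLifecycleConfiguration.Rule()
        .withId("expire-tmp-after-3-days")
        .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("tmp/")))
        .withExpirationInDays(3)
        .withStatus(BucketLifecycleConfiguration.ENABLED);

s3Client.setBucketLifecycleConfiguration(bucketName,
        new BucketLifecycleConfiguration().withRules(Arrays.asList(expireRule)));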
I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document
or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
$('#fine-uploader').fineUploaderS3({
request: {
endpoint: "http://mybucket.s3.amazonaws.com",
accessKey: "changeme"
},
signature: {
endpoint: "endpoint.php"
},
uploadSuccess: {
endpoint: "success.html"
},
template: 'qq-template'
});
</script>
(Please note that I changed the keys/bucket names for security sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
// If you are relying on CORS, you will need to adjust the allowed domain here.
header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for "clientPrivateKey". Is that my "Secret Access Key" I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where am I generating a key-pair on the S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys, you should have created two distinct IAM users: one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private). You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform other, more restricted operations (such as deleting files), you should use your server user keys.
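To make that split concrete, here is a hedged, language-agnostic sketch (shown in Java; the method and variable names are placeholders) of what the signature endpoint does with the client user's secret key for version 2 policy signing, which is the scheme the stock PHP signature examples implement:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// The browser sends the policy document; the server Base64-encodes it, signs it
// with the CLIENT user's secret key, and returns only the signature. The secret
// key itself never reaches the browser.
static String signPolicy(String policyJson, String clientSecretKey) throws Exception {
    String policyBase64 = Base64.getEncoder()
            .encodeToString(policyJson.getBytes(StandardCharsets.UTF_8));
    Mac hmac = Mac.getInstance("HmacSHA1");
    hmac.init(new SecretKeySpec(clientSecretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
    return Base64.getEncoder()
            .encodeToString(hmac.doFinal(policyBase64.getBytes(StandardCharsets.UTF_8)));
}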
I'd like to ask a newbie question. By setting the API key in JavaScript, couldn't anyone read the source and use the key freely?
Edit: here is the code extracted from the jsFiddle:
filepicker.setKey('8PbzrhP9Tr2r6wPlSqzS');
/* Unsecured */
/*
filepicker.pick(function(fpfile){
console.log(fpfile);
});
filepicker.read(fpfile, function(contents){
console.log(contents);
})
*/
var fpfile = {'url': 'https://www.filepicker.io/api/file/KW9EJhYtS6y48Whm2S6D'};
var policy = 'eyJoYW5kbGUiOiJLVzlFSmhZdFM2eTQ4V2htMlM2RCIsImV4cGlyeSI6MTUwODE0MTUwNH0=';
var signature = '4098f262b9dba23e4766ce127353aaf4f37fde0fd726d164d944e031fd862c18';
filepicker.read(fpfile, {policy: policy, signature:signature}, function(contents){
console.log(contents);
})
filepicker.pick({policy: policy, signature:signature}, function(fpfile){
console.log(fpfile);
});
How does this prevent anyone from using the key alone to upload or read/download files from my account?
Because browser-side JavaScript is always publicly readable, your API key will be visible to others. To increase the protection of your API key beyond simple URL/hostname checking, you can use the rich policy-based security API we provide: https://developers.filepicker.io/docs/security/
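For anyone wondering what generating those policy/signature values looks like server-side, here is a hedged sketch, assuming the documented scheme of Base64-encoding the JSON policy and signing it with your app secret using HMAC-SHA256 (hex-encoded); appSecret is a placeholder and must never be embedded in the page:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Returns the `signature` for a given policy; the matching `policy` value sent to
// the client is the Base64 string computed below. The secret stays on the server.
static String signPolicy(String policyJson, String appSecret) throws Exception {
    // e.g. policyJson = {"handle":"KW9EJhYtS6y48Whm2S6D","expiry":1508141504}
    String policy = Base64.getUrlEncoder()
            .encodeToString(policyJson.getBytes(StandardCharsets.UTF_8));
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(appSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
    StringBuilder hex = new StringBuilder();
    for (byte b : mac.doFinal(policy.getBytes(StandardCharsets.UTF_8))) {
        hex.append(String.format("%02x", b));
    }
    return hex.toString();
}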
Java bytecode is easily reversible with a trivial amount of work. If you need to ship a secret with the app, fetch it from a server over TLS and keep it only in RAM, or at the very least XOR it with a password.