My understanding is that before Jets3t 0.7.4, the S3 endpoint was set statically via S3Service::setS3EndpointHost, so there was no way to use Jets3t to GET/PUT content to S3 using different S3 endpoints in the same application.
The Jets3t 0.7.4 release notes say:
"Deprecated static methods in S3Service for generating signed URLs. The new non-static method equivalents should be used from now on to avoid dependency on a VM-wide S3 endpoint constant."
Is it now possible to change S3 endpoints dynamically? If so, how? Is there a setS3Endpoint method available?
You can set it like this:
import org.jets3t.service.Constants;
import org.jets3t.service.Jets3tProperties;

private void setS3Endpoint(final String endpoint) {
    // override the endpoint in the properties loaded from jets3t.properties
    final Jets3tProperties props = Jets3tProperties.getInstance(Constants.JETS3T_PROPERTIES_FILENAME);
    props.setProperty("s3service.s3-endpoint", endpoint);
}
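Note that Jets3tProperties.getInstance returns the VM-wide properties instance, so this changes the endpoint for every service created after the call (and, as far as I can tell, for existing services sharing that instance). A hypothetical usage, assuming an AWS-style regional endpoint and credentials you already hold:

setS3Endpoint("s3-eu-west-1.amazonaws.com");
S3Service service = new RestS3Service(credentials); // picks up the new endpoint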
There is no such method in the Jets3t API. The endpoint is set in the jets3t.properties file. You could (theoretically) pull in the jets3t.properties file and change it with a helper class in Java, then create a new S3Service object that hopefully picks up the new config.
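For what it's worth, a rough sketch of that helper-class idea follows. Treat it as a sketch only: it assumes your Jets3t version exposes the RestS3Service constructor that accepts a Jets3tProperties (present in the 0.7.x/0.8.x line) and that Jets3tProperties can be instantiated directly; verify both against the version you're using.

import org.jets3t.service.Jets3tProperties;
import org.jets3t.service.S3Service;
import org.jets3t.service.S3ServiceException;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.security.AWSCredentials;

// hypothetical helper: builds an S3Service bound to its own endpoint, so two
// services with different endpoints can coexist in the same application
public final class EndpointScopedS3 {
    public static S3Service create(AWSCredentials credentials, String endpoint)
            throws S3ServiceException {
        Jets3tProperties props = new Jets3tProperties(); // assumption: public no-arg constructor
        props.setProperty("s3service.s3-endpoint", endpoint);
        // the 4-arg constructor binds this properties object to this service only
        return new RestS3Service(credentials, "my-app", null, props);
    }
}

Each service then carries its own properties object, avoiding the VM-wide endpoint constant the release notes mention.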
I'm using the AWS Amplify library in an iOS project and I can't find a way to pass metadata when using the Storage.uploadData method. The only two parameters that uploadData seems to accept are "key" and "data".
Is there a way to add metadata when using Amplify.Storage.uploadData?
Here is the API Reference...
https://aws-amplify.github.io/amplify-ios/docs/Protocols.html#/s:7Amplify26StorageUploadDataOperationP
I am building an HTTP API using sam local start-api. Each endpoint of this API is mapped to a Lambda handler I have written in JavaScript.
One of these Lambda handlers needs to download and upload files from S3, for which I am using the S3Client from @aws-sdk/client-s3. I have tried to initialize the client as follows:
const s3Client = new S3Client({
region: "eu-west-1"
});
expecting that it reads the credentials from my ~/.aws/credentials file, but it does not. All operations via this client fail due to lack of permissions.
I would like to know the correct way to use this S3Client from within a Lambda handler that I am testing locally using sam local.
If you're not using the default profile in your AWS credentials file, SAM CLI commands have a --profile option to specify which profile to use.
For example:
sam local start-api --profile my_profile
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-api.html
I have some code using AWSSDK.S3 to upload data to S3, no mystery.
Since IBM claims its Cloud Object Storage is S3 compatible, would it be possible to use AWSSDK.S3 to upload files to IBM COS by only changing the ServiceURL in appsettings.json?
Has anybody done that before?
I'm not sure about appsettings.json, but yes, if you set the ServiceURL in the config used to create the client, it should work transparently. Obviously any AWS features that aren't supported by COS won't work, and any COS extensions (like API key auth, Key Protect, etc.) won't be available.
Something like:
using Amazon.Runtime;
using Amazon.S3;

// point the client at the COS endpoint instead of AWS
AmazonS3Config S3Config = new AmazonS3Config { ServiceURL = "https://s3.us.cloud-object-storage.appdomain.cloud" };
string accessKeyId = "<accesskey>";
string secretAccessKey = "<secretkey>";
BasicAWSCredentials credentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);
AmazonS3Client client = new AmazonS3Client(credentials, S3Config);
I was able to use AWSSDK.S3 on a .NET Core 3.1.17 backend.
My aim was to use the IBM COS (Cloud Object Storage) service: read, write and delete files from it.
The usage of AWSSDK.S3 is due to the fact that right now there is no NuGet package from IBM or anyone else that helps developers with this, so there are two ways:
implement all these features (read, write, delete) manually via the REST API of the IBM COS service (which should be S3 compliant)
try to use the S3-compliant AWSSDK.S3 package
Thanks to the previous answer I did a bit of research and refinement, and these are the steps to make it work, even with Microsoft's Dependency Injection.
In the IBM Cloud platform, create service credentials including HMAC credentials. This is an important step: it gives you credentials with an AccessKeyId and a SecretAccessKey, which you won't see otherwise.
Now add a section like this to appsettings.json:
{
  "CosLogs": {
    "ServiceURL": "https://s3.eu-de.cloud-object-storage.appdomain.cloud",
    "AccessKeyId": "yourAccessKeyIdTakenFromCredentialServiceDetail",
    "SecretAccessKey": "yourSecretAccessKeyTakenFromCredentialServiceDetail"
  }
}
Keep in mind that the ServiceURL can be retrieved from the IBM Cloud endpoint documentation, and it depends on the region(s) where you decided to locate the resource.
In my case, since I'm using EU (Germany), my ServiceURL is: s3.eu-de.cloud-object-storage.appdomain.cloud
In Startup.cs add the following:
// bind the CosLogs section from appsettings.json
var awsOptions = configuration.GetAWSOptions("CosLogs");
var accessKeyId = configuration.GetValue<string>("CosLogs:AccessKeyId");
var secretAccessKey = configuration.GetValue<string>("CosLogs:SecretAccessKey");
// GetAWSOptions does not read credentials from config, so set them explicitly
awsOptions.Credentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);
services.AddDefaultAWSOptions(awsOptions);
services.AddAWSService<IAmazonS3>();
Use it in your classes via DI. Example:
public class CosService
{
    /// <summary>
    /// The S3 client (COS is S3 compatible)
    /// </summary>
    private readonly IAmazonS3 s3Client;
    private readonly ILogger<CosService> logger;

    public CosService(IAmazonS3 s3Client, ILogger<CosService> logger)
    {
        this.s3Client = s3Client;
        this.logger = logger;
    }

    public async Task DoCosCallAsync(CancellationToken cancellationToken)
    {
        var bucketList = await s3Client.ListBucketsAsync(cancellationToken);
    }
}
Relevant packages installed:
NetCore 3.1.1x
AWSSDK.S3 3.7.1.5
AWSSDK.Extensions.NETCore.Setup 3.7.0.1
I can successfully send the InitiateMultipartUploadRequest and get an InitiateMultipartUploadResponse back, but then get an Access Denied error when sending the first UploadPartRequest.
Note that all of the below cases upload the document successfully:
Exactly the same code (i.e. using multipart upload), but to a different bucket that uses SSE-S3 encryption.
Using the low-level API and uploading the document in one go, i.e. creating a PutObjectRequest and then calling amazonS3Client.PutObjectAsync(putObjectRequest).
Using the high-level TransferUtility class.
Maybe the encryption key was not forwarded in the call properly. One thing worth checking: for multipart uploads to an SSE-KMS encrypted bucket, AWS requires the caller to have both kms:GenerateDataKey and kms:Decrypt permissions on the KMS key. The initiate call only needs kms:GenerateDataKey, so a missing kms:Decrypt surfaces exactly like this: InitiateMultipartUpload succeeds and the first UploadPartRequest fails with Access Denied, while uploads to an SSE-S3 bucket keep working.
I'm trying to use jclouds to talk to an OpenStack / Swift storage cloud installation that only exposes an S3 API (it does not support the Swift / Rackspace API).
I tried:
Properties overrides = new Properties();
overrides.setProperty(Constants.PROPERTY_ENDPOINT, CLOUD_SERVICE_ENDPOINT);
// get a context that offers the portable BlobStore api
BlobStoreContext context = new BlobStoreContextFactory().createContext("aws-s3", ident,
password, ImmutableSet.<Module> of(), overrides);
The server replies with an authentication error 403. Using the standard AWS SDK or Python boto works fine, so it's not a server problem, but most likely incorrect use of jclouds.
jclouds in fact supports Swift, so you don't need to do anything special. I'd recommend using jclouds 1.3.1 and configuring the dependency org.jclouds.api/swift.
Then you just need to enter your endpoint, identity, and credential:
Properties overrides = new Properties();
overrides.setProperty("swift.endpoint", "http://1.1.1.1:8080/auth");
BlobStoreContext context = new BlobStoreContextFactory().createContext("swift",
        "XXXXXX:YYYYY", "password", ImmutableSet.<Module> of(), overrides);
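Once the context is built, a minimal usage sketch (continuing from the snippet above; the container and blob names are just examples):

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;

BlobStore blobStore = context.getBlobStore();

// create a container and upload a blob via the portable BlobStore api
blobStore.createContainerInLocation(null, "my-container");
Blob blob = blobStore.blobBuilder("hello.txt").payload("hello world").build();
blobStore.putBlob("my-container", blob);

context.close();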
The following should work for you. It is known to work on vBlob, for example.
import static org.jclouds.s3.reference.S3Constants.PROPERTY_S3_VIRTUAL_HOST_BUCKETS;
...
Properties overrides = new Properties();
overrides.setProperty(PROPERTY_S3_VIRTUAL_HOST_BUCKETS, "false");
BlobStore blobstore = ContextBuilder.newBuilder(new S3ApiMetadata()) // or "s3"
.endpoint("http://host:port")
.credentials(accessKey, secretKey)
.overrides(overrides)
.buildView(BlobStoreContext.class).getBlobStore();
If your clone doesn't accept S3 requests at the root URL, you'll need to set another parameter accordingly:
import static org.jclouds.s3.reference.S3Constants.PROPERTY_S3_SERVICE_PATH;
...
overrides.setProperty(PROPERTY_S3_SERVICE_PATH, "/services/Walrus");
...
.endpoint("http://host:port/services/Walrus")