Use JClouds to talk to a non-AWS cloud with an S3 API - amazon-s3

I'm trying to use JClouds to talk to an OpenStack Swift storage cloud installation that only exposes an S3 API (it does not support the Swift/Rackspace API).
I tried:
Properties overrides = new Properties();
overrides.setProperty(Constants.PROPERTY_ENDPOINT, CLOUD_SERVICE_ENDPOINT);
// get a context that offers the portable BlobStore API
BlobStoreContext context = new BlobStoreContextFactory().createContext("aws-s3", ident,
        password, ImmutableSet.<Module>of(), overrides);
The server replies with a 403 authentication error. Using the standard AWS SDK or Python boto works fine, so it's not a server problem but most likely incorrect use of jclouds.

jclouds in fact supports Swift, so you don't need to do anything special. I'd recommend using jclouds 1.3.1 and configuring the dependency org.jclouds.api/swift.
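In a Maven build, that dependency would look something like this (the version is assumed from the 1.3.1 recommendation above; adjust to your release):
<dependency>
    <groupId>org.jclouds.api</groupId>
    <artifactId>swift</artifactId>
    <version>1.3.1</version>
</dependency>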
Then you just need to enter your endpoint, identity, and credential:
Properties overrides = new Properties();
overrides.setProperty("swift.endpoint", "http://1.1.1.1:8080/auth");
BlobStoreContext context = new BlobStoreContextFactory().createContext("swift",
        "XXXXXX:YYYYY", "password", ImmutableSet.<Module>of(), overrides);
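Note that with Swift's v1.0 auth (the /auth endpoint above) the identity is typically of the form account:user, which is why the example identity is written as XXXXXX:YYYYY.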

The following should work for you. It is known to work on vBlob, for example.
import static org.jclouds.s3.reference.S3Constants.PROPERTY_S3_VIRTUAL_HOST_BUCKETS;
...
Properties overrides = new Properties();
overrides.setProperty(PROPERTY_S3_VIRTUAL_HOST_BUCKETS, "false");
BlobStore blobstore = ContextBuilder.newBuilder(new S3ApiMetadata()) // or "s3"
        .endpoint("http://host:port")
        .credentials(accessKey, secretKey)
        .overrides(overrides)
        .buildView(BlobStoreContext.class).getBlobStore();
If your clone doesn't accept S3 requests at the root URL, you'll also need to set the service-path property accordingly:
import static org.jclouds.s3.reference.S3Constants.PROPERTY_S3_SERVICE_PATH;
...
overrides.setProperty(PROPERTY_S3_SERVICE_PATH, "/services/Walrus");
...
.endpoint("http://host:port/services/Walrus")
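Putting both overrides together, a complete sketch might look like the following (host, port, and the Walrus service path are placeholders carried over from the fragments above; accessKey and secretKey are your clone's credentials):
import static org.jclouds.s3.reference.S3Constants.PROPERTY_S3_SERVICE_PATH;
import static org.jclouds.s3.reference.S3Constants.PROPERTY_S3_VIRTUAL_HOST_BUCKETS;

import java.util.Properties;
import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.BlobStoreContext;

Properties overrides = new Properties();
// use path-style rather than virtual-host-style bucket addressing
overrides.setProperty(PROPERTY_S3_VIRTUAL_HOST_BUCKETS, "false");
// only needed when the clone serves S3 under a sub-path
overrides.setProperty(PROPERTY_S3_SERVICE_PATH, "/services/Walrus");

BlobStoreContext context = ContextBuilder.newBuilder("s3")
        .endpoint("http://host:port/services/Walrus")
        .credentials(accessKey, secretKey)
        .overrides(overrides)
        .buildView(BlobStoreContext.class);
try {
    BlobStore blobstore = context.getBlobStore();
    blobstore.list(); // smoke test: list containers
} finally {
    context.close();
}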

Related

AWS SAM: How to initialize S3Client credentials from a lambda function handler

I am building an HTTP API using sam local start-api. Each endpoint of this API is mapped to a Lambda handler I have written in JavaScript.
One of these Lambda handlers needs to download and upload files from S3, for which I am using the S3Client from @aws-sdk/client-s3. I have tried to initialize the client as follows:
const s3Client = new S3Client({
    region: "eu-west-1"
});
expecting it to read the credentials from my ~/.aws/credentials file, but it does not. All operations via this client fail due to lack of permissions.
I would like to know the correct way of using this S3Client from within a Lambda handler that I am testing locally using sam local.
If you're not using the default profile from your AWS credentials file, the SAM CLI commands have a --profile option to specify which profile to use.
For example:
sam local start-api --profile my_profile
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-start-api.html
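As far as I know, sam local injects the selected profile's access key and secret into the Lambda container as environment variables, which the SDK's default credential provider chain then picks up, so the handler code itself doesn't need to pass explicit credentials.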

Using AWSSDK.S3 to upload files to IBM COS (S3 compatible) with .net-core

I have some code using AWSSDK.S3 to upload data to S3, no mystery.
Since IBM claims its Cloud Object Storage is S3 compatible, would it be possible to use AWSSDK.S3 to upload files to IBM COS by only changing the ServiceURL in appsettings.json?
Has anybody done that before?
I'm not sure about appsettings.json, but yes, if you set the ServiceURL in the config used to create a client, it should work transparently. Obviously any AWS features that aren't supported by COS won't work, and any COS extensions (like API key auth, Key Protect, etc.) won't be available.
Something like:
AmazonS3Config S3Config = new AmazonS3Config {ServiceURL = "https://s3.us.cloud-object-storage.appdomain.cloud"};
string accessKeyId = "<accesskey>";
string secretAccessKey = "<secretkey>";
BasicAWSCredentials credentials = new BasicAWSCredentials(accessKeyId, secretAccessKey);
AmazonS3Client client = new AmazonS3Client(credentials, S3Config);
I was able to use AWSSDK.S3 on a .NET Core 3.1.17 backend.
My aim was to use the IBM COS (Cloud Object Storage) service: read, write, and delete files from it.
I used AWSSDK.S3 because right now there is no NuGet package from IBM or from others which can help us developers with this, so there are two ways:
- implement all these features (read, write, delete) manually via the REST API of the IBM COS service (which should be S3 compliant)
- use the AWSSDK.S3 package, since COS is S3 compliant
Thanks to the previous answer, I did a bit of research and refinement; these are the steps to make it work, including with Microsoft's dependency injection.
In the IBM Cloud platform, create service credentials including HMAC credentials. This is an important step: it gives you credentials with an AccessKeyId and a SecretAccessKey, which you won't see otherwise.
Now in the appsettings.json add a JSON section like this:
{
  "CosLogs": {
    "ServiceURL": "https://s3.eu-de.cloud-object-storage.appdomain.cloud",
    "AccessKeyId": "yourAccessKeyIdTakenFromCredentialServiceDetail",
    "SecretAccessKey": "yourSecretAccessKeyTakenFromCredentialServiceDetail"
  }
}
Keep in mind that the ServiceURL can be retrieved from the IBM Cloud endpoint documentation, and it depends on the region(s) where you decided to locate the resource.
In my case, since I'm using EU Germany, my ServiceURL is s3.eu-de.cloud-object-storage.appdomain.cloud.
In Startup.cs add the following
var awsOptions = configuration.GetAWSOptions("CosLogs");
var accessKeyId = configuration.GetValue<string>("CosLogs:AccessKeyId");
var secretAccessKey = configuration.GetValue<string>("CosLogs:SecretAccessKey");
awsOptions.Credentials = new BasicAWSCredentials(accessKeyId,secretAccessKey);
services.AddDefaultAWSOptions(awsOptions);
services.AddAWSService<IAmazonS3>();
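For context: AddDefaultAWSOptions registers the options assembled above for every AWS SDK client the container creates, and AddAWSService<IAmazonS3> registers an IAmazonS3 client built from them; that registration is what makes the constructor injection in the next step work.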
Use it in your classes via dependency injection. Example:
/// <summary>
/// The S3 client (COS is S3 compatible).
/// </summary>
private readonly IAmazonS3 s3Client;
private readonly ILogger<CosService> logger;

public CosService(IAmazonS3 s3Client, ILogger<CosService> logger)
{
    this.s3Client = s3Client;
    this.logger = logger;
}

public async Task DoCosCallAsync(CancellationToken cancellationToken)
{
    var bucketList = await s3Client.ListBucketsAsync(cancellationToken);
}
Relevant packages installed:
.NET Core 3.1.1x
AWSSDK.S3 3.7.1.5
AWSSDK.Extensions.NETCore.Setup 3.7.0.1

Unexpected v4 signed url using com.amazonaws:aws-java-sdk 1.11.18

We're creating Amazon S3 signed URLs using com.amazonaws:aws-java-sdk version 1.11.18:
AmazonS3 s3 = new AmazonS3Client(credentials);
s3.generatePresignedUrl(bucketName, objectName, expiration, method);
We expect to get a signed URL that contains a query parameter called "Signature" (v2 signing).
We noticed that on our servers some requests result in v4 signing, where we unexpectedly get an "X-Amz-Signature" query parameter as part of the signed URL.
Once this starts, it's reproducible on the server for the same requested S3 object.
However, requests to sign other objects will still use v2.
Restarting the Tomcat service on the broken server "fixes" the issue.
Any idea what could cause the library to start signing some objects with v4?
The issue was reproduced in the current version of the SDK (1.11.244).
Eventually we went with setting the signer configuration manually:
s3 = new AmazonS3Client(credentials,
        new ClientConfiguration().withSignerOverride("NoOpSignerType"));
We suspect this behaviour is caused by the internal implementation of the createSigner method, which signs requests with v4 if the bucket is contained in the client's internal region cache:
private static final Map<String, String> bucketRegionCache
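For what it's worth, if the goal is to force v2 signing explicitly rather than to bypass the signer, the SDK also accepts the "S3SignerType" override, which selects the legacy v2 S3 signer; a minimal sketch:
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

// Pin the client to AWS signature version 2 for S3 instead of
// disabling signing entirely with NoOpSignerType.
AmazonS3 s3 = new AmazonS3Client(credentials,
        new ClientConfiguration().withSignerOverride("S3SignerType"));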

(403) Access Not Configured. Google Cloud Datastore API

I am getting the following exception
Google_Service_Exception: Error calling POST https://www.googleapis.com/datastore/v1beta2/datasets/smartflowviewer/lookup: (403) Access Not Configured. Google Cloud Datastore API has not been used in project 529103574478 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/datastore/overview?project=529103574478 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
while trying to access the datastore
$service = new Google_Service_Datastore($client);
$service_dataset = $service->datasets;
$path = new Google_Service_Datastore_KeyPathElement();
$path->setKind('auth');
$path->setName($email);
$key = new Google_Service_Datastore_Key();
$key->setPath([$path]);
$lookup_req = new Google_Service_Datastore_LookupRequest();
$lookup_req->setKeys([$key]);
$response = $service_dataset->lookup('smartflowviewer', $lookup_req);
I am using an OAuth web client to work with the API:
$client = new Google_Client();
$client->setClientId($cfg['CLIENT_ID']);
$client->setClientSecret($cfg['CLIENT_SECRET']);
$client->setAccessType('offline');
The project had been working completely fine until yesterday. I have not deployed any new code or changed any settings in the last month, and suddenly it started throwing this error.
Any ideas what might be causing this behavior? Thanks a lot.
The Cloud Datastore v1beta2 API is deprecated, but you can update your code to use the Cloud Datastore v1 API.
One option is to look at the php-gds library.

JetS3t : Amazon S3 : How to dynamically change endpoints

My understanding is that before JetS3t 0.7.4 the S3 endpoint was set statically via S3Service::setS3EndpointHost, so there was no way to use JetS3t to GET/PUT content to S3 using different S3 endpoints in the same application.
In the JetS3t 0.7.4 release notes it's written:
"Deprecated static methods in S3Service for generating signed URLs. The new non-static method equivalents should be used from now on to avoid dependency on a VM-wide S3 endpoint constant."
Is it possible now to change S3 endpoints dynamically? If yes, how do I do it? Is there a setS3Endpoint method available?
You can set it like this:
private void setS3Endpoint(final String endpoint) {
final Jets3tProperties props = Jets3tProperties.getInstance(Constants.JETS3T_PROPERTIES_FILENAME);
props.setProperty("s3service.s3-endpoint", endpoint);
}
There is no such method in the JetS3t API. The endpoint is set in the jets3t.properties file. You could (theoretically) pull in the jets3t.properties file, change it with a helper class in Java, and then create a new S3Service object that hopefully picks up the new config.
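Note that Jets3tProperties.getInstance(Constants.JETS3T_PROPERTIES_FILENAME), as used in the first answer, returns a VM-wide instance, so changing it affects every service created afterwards. If your JetS3t version exposes the RestS3Service constructor that accepts a Jets3tProperties (the 0.8.x line does), a per-instance sketch along these lines should allow different endpoints in the same JVM; the endpoint value is a placeholder:
import org.jets3t.service.Jets3tProperties;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.security.AWSCredentials;

// Each service gets its own properties object, so two services in the
// same JVM can point at different endpoints.
Jets3tProperties props = new Jets3tProperties();
props.setProperty("s3service.s3-endpoint", "storage.example.com");

AWSCredentials credentials = new AWSCredentials("accessKey", "secretKey");
RestS3Service service = new RestS3Service(credentials, null, null, props);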