AWS: PutBucketLifecycleConfigurationRequest returns NotImplemented - amazon-s3

I am new to working with AWS, particularly S3. I am using the AWS Go SDK. I am trying to set bucket lifecycle rules in the method below:
func SetLifecycle(svc *s3.S3, bucket, id, status, md5 string) (*s3.PutBucketLifecycleConfigurationOutput, error) {
    input := &s3.PutBucketLifecycleConfigurationInput{
        Bucket: aws.String(bucket),
        LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
            Rules: []*s3.LifecycleRule{
                {
                    ID:     aws.String(id),
                    Status: aws.String(status),
                },
            },
        },
    }
    req, resp := svc.PutBucketLifecycleConfigurationRequest(input)
    req.HTTPRequest.Header.Set("Content-Md5", md5)
    err := req.Send()
    return resp, err
}
And calling the above method in a test:
func (suite *HeadSuite) TestLifecycleSet() {
    assert := suite
    //acl := map[string]string{"Authorization": ""}
    bucket := GetBucketName()
    err := CreateBucket(svc, bucket)
    content := strings.NewReader("Enabled")
    h := md5.New()
    content.WriteTo(h)
    sum := h.Sum(nil)
    b := make([]byte, base64.StdEncoding.EncodedLen(len(sum)))
    base64.StdEncoding.Encode(b, sum)
    md5 := string(b)
    _, err = SetLifecycle(svc, bucket, "rule1", "Enabled", md5)
    assert.Nil(err)
}
I keep getting a NotImplemented error. Why would this be happening? I had originally not added a Content-MD5 header, which I added after reading the PutBucketLifecycle documentation. However, I still get the error.

In my case I did not need to calculate the MD5 header. However, I noticed that I needed to set a Prefix on the rule, which is specified in the documentation. In addition, what version of the SDK are you using?
Here's a working example:
input := &s3.PutBucketLifecycleConfigurationInput{
    Bucket: aws.String(bucket),
    LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
        Rules: []*s3.LifecycleRule{
            {
                Prefix: aws.String(prefix),
                Status: aws.String(status),
                ID:     aws.String(id),
                Expiration: &s3.LifecycleExpiration{
                    Days: aws.Int64(1),
                },
            },
        },
    },
}
req, resp := svc.PutBucketLifecycleConfigurationRequest(input)
if err := req.Send(); err != nil {
    panic(err)
}


How can I fetch all data by latitude and longitude from the WiGLE API

I wrote a simple script to fetch all data from the WiGLE API using wigleapiv2, specifically the /api/v2/network/search endpoint. But I ran into a problem: I can only receive 1000 unique SSIDs. I change the URL every iteration, putting the previous page's searchAfter into the URL. How can I fix this and receive all data for a certain latitude and longitude?
Here is an example of the first iteration's URI (https://api.wigle.net/api/v2/network/search?closestLat=12.9&closestLong=1.2&latrange1=1.9&latrange2=1.8&longrange1=1.2&longrange2=1.4)
and here is an example of the URIs for the remaining iterations (https://api.wigle.net/api/v2/network/search?closestLat=12.9&closestLong=1.2&latrange1=1.9&latrange2=1.8&longrange1=1.2&longrange2=1.4&searchAfter=1976621348&first=1). For every iteration I change searchAfter and first.
It would be great if someone could tell me where I'm going wrong :)
I've tried using only the first or only the searchAfter parameter, but the result is the same. One thing I noticed: when I use only the searchAfter param I receive only 100 unique SSIDs, but when I use both (searchAfter and first) I receive 1000 unique SSIDs.
Here is my main.go code:
var (
    wg          = sync.WaitGroup{}
    receiveResp = make(chan []*response.WiFiNetworkWithLocation, 100)
)

func main() {
    startTime := time.Now()
    viper.AddConfigPath(".")
    viper.SetConfigFile("config.json")
    if err := viper.ReadInConfig(); err != nil {
        log.Fatalf("error trying to read from config: %v", err)
    }
    u := user.NewUser(viper.GetString("users.user.username"), viper.GetString("users.user.password"))
    db, err := postgres.NewPG()
    if err != nil {
        log.Fatalf("Cannot create postgres connection: %v", err)
    }
    postgres.WG.Add(1)
    go getResponse(u)
    go parseResponse(db)
    postgres.WG.Wait()
    fmt.Printf("Execution time: %v ", time.Since(startTime))
}

func getResponse(u *user.Creds) {
    url := fmt.Sprintf("%s?closestLat=%s&closestLong=%s&latrange1=%s&latrange2=%s&longrange1=%s&longrange2=%s",
        viper.GetString("wigle.url"),
        viper.GetString("queries.closestLat"),
        viper.GetString("queries.closestLong"),
        viper.GetString("queries.latrange1"),
        viper.GetString("queries.latrange2"),
        viper.GetString("queries.longrange1"),
        viper.GetString("queries.longrange2"),
    )
    j := 0
    i := 0
    for {
        i++
        fmt.Println(url)
        req, err := http.NewRequest("GET", url, nil)
        if err != nil {
            log.Printf("Failed to build request: %v", err)
            continue
        }
        req.SetBasicAuth(u.Username, u.Password)
        c := http.Client{}
        resp, err := c.Do(req)
        if err != nil {
            log.Printf("Failed to send request: %v", err)
            continue
        }
        body, err := ioutil.ReadAll(resp.Body)
        resp.Body.Close()
        if err != nil {
            log.Printf("Failed to read response body: %v", err)
            continue
        }
        var r response.NetSearchResponse
        if err := json.Unmarshal(body, &r); err != nil {
            log.Printf("Failed to unmarshal: %v", err)
            continue
        }
        receiveResp <- r.Results
        fmt.Println(r.TotalResults, r.SearchAfter)
        if r.SearchAfter == "" {
            postgres.WG.Done()
            return
        }
        url = fmt.Sprintf("%s?closestLat=%s&closestLong=%s&latrange1=%s&latrange2=%s&longrange1=%s&longrange2=%s&searchAfter=%s&first=%v",
            viper.GetString("wigle.url"),
            viper.GetString("queries.closestLat"),
            viper.GetString("queries.closestLong"),
            viper.GetString("queries.latrange1"),
            viper.GetString("queries.latrange2"),
            viper.GetString("queries.longrange1"),
            viper.GetString("queries.longrange2"),
            r.SearchAfter,
            i,
        )
        j++
        fmt.Println(j)
    }
}

func parseResponse(db *sql.DB) {
    for responses := range receiveResp {
        clearResponses := make([]response.WiFiNetworkWithLocation, 0, len(responses))
        for _, val := range responses {
            clearResponses = append(clearResponses, *val)
        }
        postgres.WG.Add(1)
        go postgres.SaveToDB(db, "test", clearResponses)
    }
}

AWS Workdocs file upload

I have a use case where I need to upload CSV files to WorkDocs. I'm using Go, and I receive the error "The request signature we calculated does not match the signature you provided." I'm using InitiateDocumentVersionUpload with IAM user credentials. Can you please help me understand what might be causing this error?
optionsWd := workdocs.Options{
    Credentials: credentials.NewStaticCredentialsProvider(request.AccessKeyId, request.SecretAccessKey, ""),
    Region:      "us-east-1",
}
client := workdocs.New(optionsWd)
folderId := "e38c72c9ae6918109b573a17ece5f24e7a353374672b627b1b3b54918354cd5e"
docName := "testdoc"
docType := "text/csv"
data, err := r.S3.GetGetObject(ctx, "test-bucket", s3Path)
params := workdocs.InitiateDocumentVersionUploadInput{
    ParentFolderId: &folderId,
    Name:           &docName,
    ContentType:    &docType,
}
res, err := client.InitiateDocumentVersionUpload(ctx, &params)
if err != nil {
    fmt.Println(err)
}
fmt.Println(res.Metadata)
resval := *res.UploadMetadata
urlVal := *resval.UploadUrl
signedHeadVal := resval.SignedHeaders
fmt.Println(urlVal)
fmt.Println(signedHeadVal)
metadata := *res.Metadata
fmt.Println(metadata)
wdclient := &http.Client{}
req, err := http.NewRequest(http.MethodPut, urlVal, strings.NewReader(data))
if err != nil {
    fmt.Println(err)
}
req.Header.Set("Content-Type", "text/csv")
signer := v4.NewSigner()
credsVal := aws.Credentials{
    AccessKeyID:     aws.ToString(&request.AccessKeyId),
    SecretAccessKey: aws.ToString(&request.SecretAccessKey),
    SessionToken:    "",
}
requestBodyBytes, _ := ioutil.ReadAll(req.Body)
sha := sha256.Sum256(requestBodyBytes)
payloadHash := hex.EncodeToString(sha[:])
signer.SignHTTP(req.Context(), credsVal, req, payloadHash, "s3", "us-east-1", time.Now())
_, err = wdclient.Do(req)
if err != nil {
    fmt.Println(err)
}
I tried the code above and was unable to resolve the error. The expectation is to upload the file to WorkDocs.

How to write a response for kubernetes admission controller

I am trying to write a simple admission controller for pod naming (validation), but for some reason I am generating a wrong response.
Here is my code:
package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
    "regexp"

    "github.com/golang/glog"

    // for Kubernetes
    "k8s.io/api/admission/v1beta1"
    "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type myValidServerhandler struct {
}

// this is the handler function for the HTTP server
func (gs *myValidServerhandler) serve(w http.ResponseWriter, r *http.Request) {
    var Body []byte
    if r.Body != nil {
        if data, err := ioutil.ReadAll(r.Body); err == nil {
            Body = data
        }
    }
    if len(Body) == 0 {
        glog.Error("Unable to retrieve Body from API")
        http.Error(w, "Empty Body", http.StatusBadRequest)
        return
    }
    glog.Info("Received Request")
    // this is where I make sure the request is for the validation prefix
    if r.URL.Path != "/validate" {
        glog.Error("Not a Validation String")
        http.Error(w, "Not a Validation String", http.StatusBadRequest)
        return
    }
    // in this part the function takes the AdmissionReview and makes sure
    // it is in the right JSON format
    arRequest := &v1beta1.AdmissionReview{}
    if err := json.Unmarshal(Body, arRequest); err != nil {
        glog.Error("incorrect Body")
        http.Error(w, "incorrect Body", http.StatusBadRequest)
        return
    }
    raw := arRequest.Request.Object.Raw
    pod := v1.Pod{}
    if err := json.Unmarshal(raw, &pod); err != nil {
        glog.Error("Error Deserializing Pod")
        return
    }
    // this is where I make sure the pod name contains the kuku string
    podnamingReg := regexp.MustCompile(`kuku`)
    if podnamingReg.MatchString(pod.Name) {
        return
    } else {
        glog.Error("the pod does not contain \"kuku\"")
        http.Error(w, "the pod does not contain \"kuku\"", http.StatusBadRequest)
        return
    }
    // I think the main problem is with this part of the code, because the
    // error from the events I am getting in the Kubernetes namespace is that
    // I am sending 200 without a body response
    arResponse := v1beta1.AdmissionReview{
        Response: &v1beta1.AdmissionResponse{
            Result:  &metav1.Status{},
            Allowed: true,
        },
    }
    // generating the JSON response after the validation
    resp, err := json.Marshal(arResponse)
    if err != nil {
        glog.Error("Can't encode response:", err)
        http.Error(w, fmt.Sprintf("could not encode response: %v", err), http.StatusInternalServerError)
    }
    glog.Infof("Ready to write response ...")
    if _, err := w.Write(resp); err != nil {
        glog.Error("Can't write response", err)
        http.Error(w, fmt.Sprintf("could not write response: %v", err), http.StatusInternalServerError)
    }
}
The code works as expected except for a positive response (when the pod name meets the criteria).
There is another file with a main function that just loads the TLS files and starts the HTTP server.
So after some digging I found what was wrong with my code.
First, this part:
if podnamingReg.MatchString(string(pod.Name)) {
    return
} else {
    glog.Error("the pod does not contain \"kuku\"")
    http.Error(w, "the pod does not contain \"kuku\"", http.StatusBadRequest)
    return
}
By writing "return" twice I discarded the rest of the code. Moreover, I hadn't attached the request UID to the response UID, and because I am using v1 and not v1beta1 I needed to add the APIVersion to the response.
So the rest of the code looks like:
arResponse := v1beta1.AdmissionReview{
    Response: &v1beta1.AdmissionResponse{
        Result:  &metav1.Status{},
        Allowed: false,
    },
}
podnamingReg := regexp.MustCompile(`kuku`)
if podnamingReg.MatchString(pod.Name) {
    fmt.Printf("the pod %s is up to the name standard", pod.Name)
    arResponse.Response.Allowed = true
}
arResponse.APIVersion = "admission.k8s.io/v1"
arResponse.Kind = arRequest.Kind
arResponse.Response.UID = arRequest.Request.UID
So I needed to add those two parts and make sure that if the pod name is not up to standard, the right response is returned.

Stream file upload to AWS S3 using go

I want to stream a multipart/form-data (large) file upload directly to AWS S3 with as little memory and file disk footprint as possible. How can I achieve this? Resources online only explain how to upload a file and store it locally on the server.
You can use the upload manager to stream the file and upload it; you can read the comments in its source code.
You can also configure params to set the part size, concurrency, and max upload parts. Below is sample code for reference.
package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

var filename = "file_name.zip"
var myBucket = "myBucket"
var myKey = "file_name.zip"
var accessKey = ""
var accessSecret = ""

func main() {
    var awsConfig *aws.Config
    if accessKey == "" || accessSecret == "" {
        // load default credentials
        awsConfig = &aws.Config{
            Region: aws.String("us-west-2"),
        }
    } else {
        awsConfig = &aws.Config{
            Region:      aws.String("us-west-2"),
            Credentials: credentials.NewStaticCredentials(accessKey, accessSecret, ""),
        }
    }
    // The session the S3 Uploader will use
    sess := session.Must(session.NewSession(awsConfig))
    // Create an uploader with the session and default options
    //uploader := s3manager.NewUploader(sess)
    // Create an uploader with the session and custom options
    uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
        u.PartSize = 5 * 1024 * 1024 // The minimum/default allowed part size is 5MB
        u.Concurrency = 2            // default is 5
    })
    // open the file
    f, err := os.Open(filename)
    if err != nil {
        fmt.Printf("failed to open file %q, %v", filename, err)
        return
    }
    defer f.Close()
    // Upload the file to S3.
    result, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String(myBucket),
        Key:    aws.String(myKey),
        Body:   f,
    })
    // in case it fails to upload
    if err != nil {
        fmt.Printf("failed to upload file, %v", err)
        return
    }
    fmt.Printf("file uploaded to, %s\n", result.Location)
}
You can also do this using minio-go:
n, err := s3Client.PutObject("bucket-name", "objectName", object, size, "application/octet-stream")
PutObject() automatically does multipart upload internally. Example
Another option is to mount the S3 bucket with goofys and then stream your writes to the mountpoint. goofys does not buffer the content locally so it will work fine with large files.
I was trying to do this with the aws-sdk-go v2 package, so I had to change @maaz's code a bit. I am leaving it here for others -
type TokenMeta struct {
    AccessToken  string
    SecretToken  string
    SessionToken string
    BucketName   string
}

// Create an S3Client struct with the token meta and use it as a receiver for this method
func (s3Client S3Client) StreamUpload(fileToUpload string, fileKey string) error {
    accessKey := s3Client.TokenMeta.AccessToken
    secretKey := s3Client.TokenMeta.SecretToken
    awsConfig, err := config.LoadDefaultConfig(context.TODO(),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(accessKey, secretKey, s3Client.TokenMeta.SessionToken)),
    )
    if err != nil {
        return fmt.Errorf("error creating aws config: %v", err)
    }
    client := s3.NewFromConfig(awsConfig)
    uploader := manager.NewUploader(client, func(u *manager.Uploader) {
        u.PartSize = 5 * 1024 * 1024
        u.BufferProvider = manager.NewBufferedReadSeekerWriteToPool(10 * 1024 * 1024)
    })
    f, err := os.Open(fileToUpload)
    if err != nil {
        return fmt.Errorf("failed to open fileToUpload %q, %v", fileToUpload, err)
    }
    defer func(f *os.File) {
        if err := f.Close(); err != nil {
            fmt.Printf("error closing fileToUpload: %v", err)
        }
    }(f)
    inputObj := &s3.PutObjectInput{
        Bucket: aws.String(s3Client.TokenMeta.BucketName),
        Key:    aws.String(fileKey),
        Body:   f,
    }
    uploadResult, err := uploader.Upload(context.TODO(), inputObj)
    if err != nil {
        return fmt.Errorf("failed to upload fileToUpload, %v", err)
    }
    fmt.Printf("%s uploaded to, %s\n", fileToUpload, uploadResult.Location)
    return nil
}
I didn't try it, but if I were you I'd try the multipart upload option.
You can read the multipart upload docs.
Here is a Go example for multipart upload and multipart upload abort.

Go Connecting to S3

Working on learning Go, and I am writing a component to manage pictures.
I've been looking at the s3 library here: https://godoc.org/launchpad.net/goamz/s3#ACL
In Node, I use the Knox client and connect to my bucket like this:
var bucket = knox.createClient({
    key: config.get('AWS_KEY'),
    secret: config.get('AWS_SECRET'),
    bucket: "bucketName"
});
In the Go s3 library I see all of the functions I need to work with the s3 bucket, but I can't find the connect function - similar to this one above.
So far, I've found this in the Docs:
type Auth struct {
    AccessKey, SecretKey string
}
And it seems like I need to call this function:
func EnvAuth() (auth Auth, err error)
This is the EnvAuth function:
func EnvAuth() (auth Auth, err error) {
    auth.AccessKey = os.Getenv("AWS_ACCESS_KEY_ID")
    auth.SecretKey = os.Getenv("AWS_SECRET_ACCESS_KEY")
    // We fall back to EC2_ env variables if the AWS_ variants are not used.
    if auth.AccessKey == "" && auth.SecretKey == "" {
        auth.AccessKey = os.Getenv("EC2_ACCESS_KEY")
        auth.SecretKey = os.Getenv("EC2_SECRET_KEY")
    }
    if auth.AccessKey == "" {
        err = errors.New("AWS_ACCESS_KEY_ID not found in environment")
    }
    if auth.SecretKey == "" {
        err = errors.New("AWS_SECRET_ACCESS_KEY not found in environment")
    }
    return
}
In the S3 docs, I see all of the things that I need. I am just unsure how to use the library; this is the first time I've used a Go library.
A guide on connecting to AWS/S3 and then making a delete call would be very helpful!
Many thanks :)
It's probably easier than you thought. This sample code lists a bucket. You have to use your own credentials and bucket name, of course...
package main

import (
    "fmt"
    "log"

    "launchpad.net/goamz/aws"
    "launchpad.net/goamz/s3"
)

func main() {
    auth := aws.Auth{
        AccessKey: "ASDFASDFASDFASDK",
        SecretKey: "DSFSDFDWESDADSFASDFADFDSFASDF",
    }
    euwest := aws.EUWest
    connection := s3.New(auth, euwest)
    mybucket := connection.Bucket("mytotallysecretbucket")
    res, err := mybucket.List("", "", "", 1000)
    if err != nil {
        log.Fatal(err)
    }
    for _, v := range res.Contents {
        fmt.Println(v.Key)
    }
}
The original post uses the goamz library. AWS maintains the official aws-sdk-go library which should be used instead.
See the connect method in the below example, which lists all keys in a specific bucket using official Go sdk from AWS:
package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    svc := s3.New(session.New(), &aws.Config{Region: aws.String("us-east-1")})
    params := &s3.ListObjectsInput{
        Bucket: aws.String("bucket"),
    }
    resp, err := svc.ListObjects(params)
    if err != nil {
        log.Fatal(err)
    }
    for _, key := range resp.Contents {
        fmt.Println(*key.Key)
    }
}