I'm learning Go, and I am writing a component to manage pictures.
I've been looking at the s3 library here: https://godoc.org/launchpad.net/goamz/s3#ACL
In Node, I use the Knox client and connect to my bucket like this:
var bucket = knox.createClient({
  key: config.get('AWS_KEY'),
  secret: config.get('AWS_SECRET'),
  bucket: "bucketName"
});
In the Go s3 library I see all of the functions I need to work with the S3 bucket, but I can't find the connect function, similar to the one above.
So far, I've found this in the Docs:
type Auth struct {
    AccessKey, SecretKey string
}
And it seems like I need to call this function:
func EnvAuth() (auth Auth, err error)
This is the EnvAuth function:
func EnvAuth() (auth Auth, err error) {
    auth.AccessKey = os.Getenv("AWS_ACCESS_KEY_ID")
    auth.SecretKey = os.Getenv("AWS_SECRET_ACCESS_KEY")
    // We fallback to EC2_ env variables if the AWS_ variants are not used.
    if auth.AccessKey == "" && auth.SecretKey == "" {
        auth.AccessKey = os.Getenv("EC2_ACCESS_KEY")
        auth.SecretKey = os.Getenv("EC2_SECRET_KEY")
    }
    if auth.AccessKey == "" {
        err = errors.New("AWS_ACCESS_KEY_ID not found in environment")
    }
    if auth.SecretKey == "" {
        err = errors.New("AWS_SECRET_ACCESS_KEY not found in environment")
    }
    return
}
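If you go the environment-variable route, the call itself is a one-liner. A minimal sketch, assuming the goamz aws package is imported and the AWS_ variables are set:

auth, err := aws.EnvAuth()
if err != nil {
    log.Fatal(err) // neither AWS_ nor EC2_ credentials were found
}
// auth now holds AccessKey and SecretKey and can be passed to s3.New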
In the S3 docs, I see all of the things that I need. I am just unsure about how to use the library; this is the first time I've used a Go library.
A guide on connecting to AWS/S3, then making a delete call would be very helpful!
Many thanks :)
It's probably easier than you think. This sample code lists a bucket. You have to use your own credentials and bucket name, of course...
package main

import (
    "fmt"
    "log"

    "launchpad.net/goamz/aws"
    "launchpad.net/goamz/s3"
)

func main() {
    // Plug in your own credentials here.
    auth := aws.Auth{
        AccessKey: "ASDFASDFASDFASDK",
        SecretKey: "DSFSDFDWESDADSFASDFADFDSFASDF",
    }
    euwest := aws.EUWest

    // s3.New is the "connect" step you were looking for.
    connection := s3.New(auth, euwest)
    mybucket := connection.Bucket("mytotallysecretbucket")

    // List up to 1000 keys in the bucket.
    res, err := mybucket.List("", "", "", 1000)
    if err != nil {
        log.Fatal(err)
    }
    for _, v := range res.Contents {
        fmt.Println(v.Key)
    }
}
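For the delete call asked about in the question, the same Bucket value exposes a Del method. A minimal sketch (the object key is just a placeholder):

// Del removes a single object from the bucket by its key.
if err := mybucket.Del("photos/old-picture.jpg"); err != nil {
    log.Fatal(err)
}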
The original post uses the goamz library. AWS maintains the official aws-sdk-go library, which should be used instead.
See how the connection is set up in the example below, which lists all keys in a specific bucket using the official Go SDK from AWS:
package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // Credentials are picked up from the environment or shared config.
    svc := s3.New(session.New(), &aws.Config{Region: aws.String("us-east-1")})
    params := &s3.ListObjectsInput{
        Bucket: aws.String("bucket"),
    }
    resp, err := svc.ListObjects(params)
    if err != nil {
        log.Fatal(err)
    }
    for _, key := range resp.Contents {
        fmt.Println(*key.Key)
    }
}
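The original question also asked about deleting an object; with the official SDK that is a DeleteObject call. A minimal sketch reusing the svc client from above (bucket and key names are placeholders):

// DeleteObject removes a single key from the bucket.
_, err := svc.DeleteObject(&s3.DeleteObjectInput{
    Bucket: aws.String("bucket"),
    Key:    aws.String("path/to/object"),
})
if err != nil {
    log.Fatal(err)
}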
I'm using the Godog library to implement some cucumber tests for my API code. Right now I'm only testing one endpoint, but I'm hitting an error where it looks like it expects a server to be open. I created an httptest server that listens on port 8080, but the tests are failing with a 404.
If I run my cucumber tests in debug mode they work, but if I use the run test command they fail because they expect an open port (dial tcp localhost:8080). Could someone point me in the right direction, since I don't quite know where I'm failing?
This is my godog_test:
func mockServer() *httptest.Server {
    router := mux.NewRouter()
    // Bind to the fixed port the tests expect instead of the random
    // port httptest would normally pick.
    u, _ := url.Parse("http://localhost:8080")
    l, err := net.Listen("tcp", u.Host)
    if err != nil {
        panic(err) // port 8080 is already in use
    }
    server := httptest.NewUnstartedServer(router)
    // Swap the default listener for ours before starting.
    _ = server.Listener.Close()
    server.Listener = l
    server.Start()
    return server
}

func killMockServer(server *httptest.Server) {
    server.Close()
}
func TestFeatures(t *testing.T) {
    suite := godog.TestSuite{
        TestSuiteInitializer: InitializeTestSuite,
        ScenarioInitializer:  InitializeScenario,
        Options: &godog.Options{
            Format:   "pretty",
            Paths:    []string{"features"},
            TestingT: t,
        },
    }
    if suite.Run() != 0 {
        t.Fatal("non-zero status returned, failed to run feature tests")
    }
}

func InitializeTestSuite(ctx *godog.TestSuiteContext) {
    var server *httptest.Server
    ctx.BeforeSuite(func() {
        server = mockServer()
    })
    ctx.AfterSuite(func() {
        fmt.Println("shutting down everything")
        killMockServer(server)
    })
}
This is the POST step that I'm testing:
func iCallPOSTTo(path string) error {
    body, err := json.Marshal(reqBody)
    if err != nil {
        return err
    }
    request, err := http.NewRequest(
        http.MethodPost,
        endpoint+path,
        bytes.NewReader(body), // send the marshaled JSON, not the unmarshaled struct
    )
    if err != nil {
        return err
    }
    res, err := http.DefaultClient.Do(request)
    if err != nil {
        return err
    }
    resBody, err := io.ReadAll(res.Body)
    if err != nil {
        return err
    }
    res.Body.Close()
    [REDACTED]
    return nil
}
I tried using a mock server to open port 8080, since at first I was receiving a connection refused error. After that I'm getting a 404, which means that my test is not reaching the actual function that processes the POST request. I'm not sure if the mock server is the correct approach in this case.
I am trying to write a simple admission controller for pod naming (validation), but for some reason I am generating a wrong response.
Here is my code:
package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
    "regexp"

    "github.com/golang/glog"

    // for Kubernetes
    "k8s.io/api/admission/v1beta1"
    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type myValidServerhandler struct {
}

// this is the handler function from the HTTP server
func (gs *myValidServerhandler) serve(w http.ResponseWriter, r *http.Request) {
    var Body []byte
    if r.Body != nil {
        if data, err := ioutil.ReadAll(r.Body); err == nil {
            Body = data
        }
    }
    if len(Body) == 0 {
        glog.Error("Unable to retrieve Body from API")
        http.Error(w, "Empty Body", http.StatusBadRequest)
        return
    }
    glog.Info("Received Request")
    // this is where I make sure the request is for the validation prefix
    if r.URL.Path != "/validate" {
        glog.Error("Not a Validation String")
        http.Error(w, "Not a Validation String", http.StatusBadRequest)
        return
    }
    // in this part the function takes the AdmissionReview and makes sure it
    // is in the right JSON format
    arRequest := &v1beta1.AdmissionReview{}
    if err := json.Unmarshal(Body, arRequest); err != nil {
        glog.Error("incorrect Body")
        http.Error(w, "incorrect Body", http.StatusBadRequest)
        return
    }
    raw := arRequest.Request.Object.Raw
    pod := v1.Pod{}
    if err := json.Unmarshal(raw, &pod); err != nil {
        glog.Error("Error Deserializing Pod")
        return
    }
    // this is where I make sure the pod name contains the kuku string
    podnamingReg := regexp.MustCompile(`kuku`)
    if podnamingReg.MatchString(pod.Name) {
        return
    } else {
        glog.Error("the pod does not contain \"kuku\"")
        http.Error(w, "the pod does not contain \"kuku\"", http.StatusBadRequest)
        return
    }
    // I think the main problem is with this part of the code because the
    // error from the events I am getting in the Kubernetes namespace is that
    // I am sending 200 without a body response
    arResponse := v1beta1.AdmissionReview{
        Response: &v1beta1.AdmissionResponse{
            Result:  &metav1.Status{},
            Allowed: true,
        },
    }
    // generating the JSON response after the validation
    resp, err := json.Marshal(arResponse)
    if err != nil {
        glog.Error("Can't encode response:", err)
        http.Error(w, fmt.Sprintf("could not encode response: %v", err), http.StatusInternalServerError)
    }
    glog.Infof("Ready to write response ...")
    if _, err := w.Write(resp); err != nil {
        glog.Error("Can't write response", err)
        http.Error(w, fmt.Sprintf("could not write response: %v", err), http.StatusInternalServerError)
    }
}
The code is working as expected except for a positive output (where the pod name meets the criteria).
There is another file with a main that just grabs the TLS files and starts the HTTP service.
After a bit of digging I found what was wrong with my code.
First, this part:
if podnamingReg.MatchString(string(pod.Name)) {
    return
} else {
    glog.Error("the pod does not contain \"kuku\"")
    http.Error(w, "the pod does not contain \"kuku\"", http.StatusBadRequest)
    return
}
By writing "return" in both branches I discarded the rest of the code. What's more, I hadn't attached the request UID to the response UID, and because I am using v1 and not v1beta1 I needed to add the APIVersion to the response.
So the rest of the code looks like:
arResponse := v1beta1.AdmissionReview{
    Response: &v1beta1.AdmissionResponse{
        Result:  &metav1.Status{},
        Allowed: false,
    },
}
podnamingReg := regexp.MustCompile(`kuku`)
if podnamingReg.MatchString(string(pod.Name)) {
    fmt.Printf("the pod %s is up to the name standard", pod.Name)
    arResponse.Response.Allowed = true
}
arResponse.APIVersion = "admission.k8s.io/v1"
arResponse.Kind = arRequest.Kind
arResponse.Response.UID = arRequest.Request.UID
So I needed to add those two parts and make sure that, when the pod name is not up to standard, the right response is still returned.
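For completeness, the rejection path can also carry a human-readable reason back to the API server instead of an empty Status. A hedged sketch of that idea (the message text is just an example):

if !podnamingReg.MatchString(pod.Name) {
    // Allowed stays false; give the API server a message to surface.
    arResponse.Response.Result = &metav1.Status{
        Message: "pod name must contain \"kuku\"",
    }
}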
Here is my Go API, which is supposed to download a file when a request of this form is posted:
curl -X POST -d "url=http://site.com/file.txt" http://localhost:8000/submit
But I get a 404. What's the reason? And how should I download files via a POST request in an API?
func downloadFile(url string) Task {
    var task Task
    resp, err := http.Get(url)
    if err != nil {
        fmt.Println("Error while downloading")
    }
    defer resp.Body.Close()
    // take the last URL segment as the local file name
    parts := strings.Split(url, "/")
    filename := parts[len(parts)-1]
    fmt.Println(filename)
    out, err := os.Create(filename)
    if err != nil {
        fmt.Println("Error while downloading")
    }
    defer out.Close()
    if _, err = io.Copy(out, resp.Body); err != nil {
        fmt.Println("Error while downloading")
    }
    return task
}

func submit(c *gin.Context) {
    c.Header("Content-Description", "File Transfer")
    c.Header("Content-Transfer-Encoding", "binary")
    url := c.Param("url")
    fmt.Println("url " + url)
    task := downloadFile(url)
    hashFile(task.ID)
    c.JSON(200, task.ID)
}

func main() {
    router := gin.Default()
    router.POST("/submit/:url", submit)
    router.Run(":8000") // start the server on :8000
}
HTTP status 404 means the server couldn't find the requested URL. This appears to make perfect sense given your curl command. You appear to be requesting the URL http://localhost:8000/submit, but your application only has a single route:
router.POST("/submit/:url", submit)
This route requires a second URL segment after /submit, such as /submit/foo.
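If the intent is to keep the curl command from the question, one option is to register the plain /submit route and read the url from the POST form body instead of a path parameter. A minimal sketch (it assumes the downloadFile function from the question; everything else is illustrative):

func main() {
    router := gin.Default()
    // Match the exact path curl requests: POST http://localhost:8000/submit
    router.POST("/submit", func(c *gin.Context) {
        // -d "url=..." arrives as a form field, not as a URL segment.
        url := c.PostForm("url")
        if url == "" {
            c.JSON(400, gin.H{"error": "missing url field"})
            return
        }
        task := downloadFile(url)
        c.JSON(200, task.ID)
    })
    router.Run(":8000")
}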
I am new to working with AWS, particularly S3. I am using the AWS Go SDK. I am trying to set bucket lifecycle rules in the method below:
func SetLifecycle(svc *s3.S3, bucket, id, status, md5 string) (*s3.PutBucketLifecycleConfigurationOutput, error) {
    input := &s3.PutBucketLifecycleConfigurationInput{
        Bucket: aws.String(bucket),
        LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
            Rules: []*s3.LifecycleRule{
                {
                    ID:     aws.String(id),
                    Status: aws.String(status),
                },
            },
        },
    }
    req, resp := svc.PutBucketLifecycleConfigurationRequest(input)
    req.HTTPRequest.Header.Set("Content-Md5", md5)
    err := req.Send()
    return resp, err
}
And calling the above method in a test:
func (suite *HeadSuite) TestLifecycleSet() {
assert := suite
//acl := map[string]string{"Authorization": ""}
bucket := GetBucketName()
err := CreateBucket(svc, bucket)
content := strings.NewReader("Enabled")
h := md5.New()
content.WriteTo(h)
sum := h.Sum(nil)
b := make([]byte, base64.StdEncoding.EncodedLen(len(sum)))
base64.StdEncoding.Encode(b,sum)
md5 := string(b)
_, err = SetLifecycle(svc, bucket, "rule1", "Enabled", md5)
assert.Nil(err)
}
I keep getting a NotImplemented error. Why would this be happening? I had originally not added a Content-MD5 header, which I added after reading the PutBucketLifecycle documentation. However, I still get the error.
I did not require calculation of the MD5 header. However, I noticed that I needed to set a Prefix, which is specified in the documentation. In addition, what version of the SDK are you using?
Here's a working example below:
input := &s3.PutBucketLifecycleConfigurationInput{
    Bucket: aws.String(bucket),
    LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
        Rules: []*s3.LifecycleRule{
            {
                Prefix: aws.String(prefix),
                Status: aws.String(status),
                ID:     aws.String(id),
                Expiration: &s3.LifecycleExpiration{
                    Days: aws.Int64(1),
                },
            },
        },
    },
}
req, resp := svc.PutBucketLifecycleConfigurationRequest(input)
if err := req.Send(); err != nil {
    panic(err)
}
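To confirm the rule actually landed, the configuration can be read back with the same client. A short sketch (it assumes the svc and bucket variables from above):

// Read the lifecycle configuration back to verify the rule was stored.
out, err := svc.GetBucketLifecycleConfiguration(&s3.GetBucketLifecycleConfigurationInput{
    Bucket: aws.String(bucket),
})
if err != nil {
    panic(err)
}
for _, rule := range out.Rules {
    fmt.Println(*rule.ID, *rule.Status)
}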
I want to stream a multipart/form-data (large) file upload directly to AWS S3 with as little memory and file disk footprint as possible. How can I achieve this? Resources online only explain how to upload a file and store it locally on the server.
You can use the upload manager to stream the file and upload it; you can read the comments in the source code.
You can also configure parameters to set the part size, concurrency, and max upload parts. Below is a sample code for reference.
package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

var filename = "file_name.zip"
var myBucket = "myBucket"
var myKey = "file_name.zip"
var accessKey = ""
var accessSecret = ""

func main() {
    var awsConfig *aws.Config
    if accessKey == "" || accessSecret == "" {
        // load default credentials
        awsConfig = &aws.Config{
            Region: aws.String("us-west-2"),
        }
    } else {
        awsConfig = &aws.Config{
            Region:      aws.String("us-west-2"),
            Credentials: credentials.NewStaticCredentials(accessKey, accessSecret, ""),
        }
    }

    // The session the S3 Uploader will use
    sess := session.Must(session.NewSession(awsConfig))

    // Create an uploader with the session and default options
    //uploader := s3manager.NewUploader(sess)

    // Create an uploader with the session and custom options
    uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
        u.PartSize = 5 * 1024 * 1024 // The minimum/default allowed part size is 5MB
        u.Concurrency = 2            // default is 5
    })

    // open the file
    f, err := os.Open(filename)
    if err != nil {
        fmt.Printf("failed to open file %q, %v", filename, err)
        return
    }
    defer f.Close()

    // Upload the file to S3.
    result, err := uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String(myBucket),
        Key:    aws.String(myKey),
        Body:   f,
    })
    // in case it fails to upload
    if err != nil {
        fmt.Printf("failed to upload file, %v", err)
        return
    }
    fmt.Printf("file uploaded to, %s\n", result.Location)
}
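Since the question is about streaming a multipart/form-data upload straight through, note that the uploader accepts any io.Reader, so you can hand it a multipart part from the incoming request without buffering to disk. A hedged handler sketch (the route wiring, myBucket, and the extra io and net/http imports are assumptions, not part of the sample above):

// handleUpload streams the file parts of a multipart/form-data request
// directly into S3 via the uploader, without writing to disk.
func handleUpload(uploader *s3manager.Uploader) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        mr, err := r.MultipartReader()
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        for {
            part, err := mr.NextPart()
            if err == io.EOF {
                break
            }
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadRequest)
                return
            }
            if part.FileName() == "" {
                continue // skip non-file form fields
            }
            // part is an io.Reader; the uploader consumes it in chunks.
            result, err := uploader.Upload(&s3manager.UploadInput{
                Bucket: aws.String(myBucket),
                Key:    aws.String(part.FileName()),
                Body:   part,
            })
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            fmt.Fprintf(w, "uploaded to %s\n", result.Location)
        }
    }
}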
You can do this using minio-go:
n, err := s3Client.PutObject("bucket-name", "objectName", object, size, "application/octet-stream")
PutObject() automatically does a multipart upload internally. Example
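The signature quoted above is from an older minio-go release; with the current minio-go/v7 API the equivalent looks roughly like this (endpoint, credential, bucket, and object names are placeholders, and reader comes from your input):

// Assumes: import "github.com/minio/minio-go/v7" and
// "github.com/minio/minio-go/v7/pkg/credentials"
client, err := minio.New("s3.amazonaws.com", &minio.Options{
    Creds:  credentials.NewStaticV4(accessKey, secretKey, ""),
    Secure: true,
})
if err != nil {
    log.Fatal(err)
}
// Size -1 tells minio-go to stream the reader as a multipart upload.
info, err := client.PutObject(context.Background(), "bucket-name", "objectName",
    reader, -1, minio.PutObjectOptions{ContentType: "application/octet-stream"})
if err != nil {
    log.Fatal(err)
}
fmt.Println("uploaded", info.Key, "size", info.Size)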
Another option is to mount the S3 bucket with goofys and then stream your writes to the mountpoint. goofys does not buffer the content locally so it will work fine with large files.
I was trying to do this with the aws-sdk-go v2 package, so I had to change the code of @maaz a bit. I'm leaving it here for others:
type TokenMeta struct {
    AccessToken  string
    SecretToken  string
    SessionToken string
    BucketName   string
}

// Create S3Client struct with the token meta and use it as a receiver for this method
func (s3Client S3Client) StreamUpload(fileToUpload string, fileKey string) error {
    accessKey := s3Client.TokenMeta.AccessToken
    secretKey := s3Client.TokenMeta.SecretToken

    awsConfig, err := config.LoadDefaultConfig(context.TODO(),
        config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(accessKey, secretKey, s3Client.TokenMeta.SessionToken)),
    )
    if err != nil {
        return fmt.Errorf("error creating aws config: %v", err)
    }

    client := s3.NewFromConfig(awsConfig)
    uploader := manager.NewUploader(client, func(u *manager.Uploader) {
        u.PartSize = 5 * 1024 * 1024
        u.BufferProvider = manager.NewBufferedReadSeekerWriteToPool(10 * 1024 * 1024)
    })

    f, err := os.Open(fileToUpload)
    if err != nil {
        return fmt.Errorf("failed to open fileToUpload %q, %v", fileToUpload, err)
    }
    defer func(f *os.File) {
        if err := f.Close(); err != nil {
            fmt.Printf("error closing fileToUpload: %v\n", err) // don't discard the close error
        }
    }(f)

    inputObj := &s3.PutObjectInput{
        Bucket: aws.String(s3Client.TokenMeta.BucketName),
        Key:    aws.String(fileKey),
        Body:   f,
    }
    uploadResult, err := uploader.Upload(context.TODO(), inputObj)
    if err != nil {
        return fmt.Errorf("failed to upload fileToUpload, %v", err)
    }
    fmt.Printf("%s uploaded to %s\n", fileToUpload, uploadResult.Location)
    return nil
}
I didn't try it, but if I were you I'd try the multipart upload option.
You can read the multipart upload docs; there are Go examples for multipart upload and multipart upload abort.
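Roughly, the low-level flow with aws-sdk-go looks like the sketch below (single part only, with placeholder bucket/key names; it assumes an *s3.S3 client named svc and a partData byte slice, and a real implementation would loop over parts and collect their ETags):

// Start the multipart upload and get an upload ID.
create, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
    Bucket: aws.String("bucket"),
    Key:    aws.String("key"),
})
if err != nil {
    log.Fatal(err)
}
// Upload a single part (real code loops, numbering parts from 1).
part, err := svc.UploadPart(&s3.UploadPartInput{
    Bucket:     aws.String("bucket"),
    Key:        aws.String("key"),
    UploadId:   create.UploadId,
    PartNumber: aws.Int64(1),
    Body:       bytes.NewReader(partData),
})
if err != nil {
    // On failure, abort so S3 doesn't keep the orphaned parts around.
    svc.AbortMultipartUpload(&s3.AbortMultipartUploadInput{
        Bucket: aws.String("bucket"), Key: aws.String("key"), UploadId: create.UploadId,
    })
    log.Fatal(err)
}
// Finish by telling S3 how to assemble the uploaded parts.
_, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
    Bucket:   aws.String("bucket"),
    Key:      aws.String("key"),
    UploadId: create.UploadId,
    MultipartUpload: &s3.CompletedMultipartUpload{
        Parts: []*s3.CompletedPart{{ETag: part.ETag, PartNumber: aws.Int64(1)}},
    },
})
if err != nil {
    log.Fatal(err)
}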