Problems displaying image taken from sqlite3 database in vue.js

I'm building a social media-like web app, but I'm having trouble displaying images on the frontend.
For the implementation I'm using OpenAPI, Go for the backend, an SQLite3 database, and Vue.js for the frontend.
Photos are stored in the database in a table with the following fields:
var tablePhoto string
err5 := db.QueryRow(`SELECT name FROM sqlite_master WHERE type='table' AND name='photo';`).Scan(&tablePhoto)
if errors.Is(err5, sql.ErrNoRows) {
	sqlStmt := `
	CREATE TABLE IF NOT EXISTS photo (photo_id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, user_id_owner_photo INTEGER NOT NULL, image BLOB, date_time datetime NOT NULL, UNIQUE(date_time))`
	_, err5 = db.Exec(sqlStmt)
	if err5 != nil {
		return nil, fmt.Errorf("error creating database structure: %w", err5)
	}
}
and a Photo struct:
type Photo struct {
	User_id_owner_photo int
	Photo_id            int
	Image               []byte
	Date_time           time.Time
}
Through the API I upload the photo by sending the image as a binary file and inserting it into the database.
func (rt *_router) uploadPhoto(w http.ResponseWriter, r *http.Request, ps httprouter.Params, ctx reqcontext.RequestContext) {
	user_id, err := strconv.ParseInt(ps.ByName("user_id"), 10, 0)
	if err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	buffer, err := io.ReadAll(r.Body)
	// buffer, err := ioutil.ReadAll(r.Body)
	if err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	var photo Photo
	my_time := time.Now()
	photo.User_id_owner_photo = int(user_id)
	photo.Image = buffer
	photo.Date_time = my_time
	dbPhoto, err := rt.db.UploadPhoto(photo.ToDatabase_photo())
	if err != nil {
		ctx.Logger.WithError(err).Error("can't create the photo!")
		w.WriteHeader(http.StatusInternalServerError)
		return
	}
	photo.FromDatabase_photo(dbPhoto)
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(photo)
}
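The rt.db.UploadPhoto method is not shown in the question; purely as an illustration, an insert into the photo table above could look roughly like this with database/sql (the function name and shape are assumptions, not the original code):
// insertPhoto is a hypothetical sketch, not the original UploadPhoto implementation.
func insertPhoto(db *sql.DB, p Photo) (Photo, error) {
	res, err := db.Exec(
		`INSERT INTO photo (user_id_owner_photo, image, date_time) VALUES (?, ?, ?)`,
		p.User_id_owner_photo, p.Image, p.Date_time)
	if err != nil {
		return p, fmt.Errorf("inserting photo: %w", err)
	}
	id, err := res.LastInsertId()
	if err != nil {
		return p, fmt.Errorf("reading generated photo_id: %w", err)
	}
	p.Photo_id = int(id)
	return p, nil
}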
On the frontend I upload the photo with a choose-file button, and the handler for that button is this function:
async choose(files_) {
    const reader = new FileReader();
    console.log(files_)
    reader.readAsDataURL(files_)
    reader.onloadend = (event) => {
        this.image = new Uint8Array(event.target.result);
    }
}
which is meant to turn the chosen file into a blob. Once this is done, in another view I want to show the image, so I fetch my profile data with a method:
let response = await this.$axios.get("/users/" + Number(this.userId));
this.profile = response.data;
and show the image in a template div:
<img src="'data:image/jpeg;base64,' + photo.Image" width="300" height="300" >
With this setup the photos are not displayed, and an error is shown in the browser console.
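A note that is not part of the original question: Go's encoding/json marshals a []byte field as a base64 string, so the Image value the frontend receives is already base64 text; also, in a Vue template the attribute has to be bound (:src rather than a plain src) for the expression to be evaluated. A small standalone Go snippet showing the encoding behaviour:
package main

import (
	"encoding/json"
	"fmt"
)

type payload struct {
	Image []byte
}

func main() {
	b, _ := json.Marshal(payload{Image: []byte{0xFF, 0xD8, 0xFF}})
	// Prints {"Image":"/9j/"}: the byte slice is emitted as base64 text.
	fmt.Println(string(b))
}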

Related

How can I fetch all data by latitude and longitude from the WiGLE API

I wrote a simple script to fetch all data from the WiGLE API v2, specifically the /api/v2/network/search endpoint. But I ran into the problem that I can only receive 1000 unique SSIDs. I change the URL on every iteration, putting the previous page's searchAfter into the URL. How can I fix this and receive all data for a given latitude and longitude?
Here is an example of the first iteration's URI (https://api.wigle.net/api/v2/network/search?closestLat=12.9&closestLong=1.2&latrange1=1.9&latrange2=1.8&longrange1=1.2&longrange2=1.4)
And here is an example of the URIs for the remaining iterations (https://api.wigle.net/api/v2/network/search?closestLat=12.9&closestLong=1.2&latrange1=1.9&latrange2=1.8&longrange1=1.2&longrange2=1.4&searchAfter=1976621348&first=1). For every iteration I change searchAfter and first.
It would be great if someone could tell me where I'm going wrong :)
I've tried using only the first parameter or only the searchAfter parameter, but with the same result. One thing I noticed: when I use only the searchAfter param I can receive only 100 unique SSIDs, but when I use both (searchAfter and first) I can receive 1000 unique SSIDs.
Here is my main.go code:
var (
wg = sync.WaitGroup{}
receiveResp = make(chan []*response.WiFiNetworkWithLocation, 100)
)
func main() {
startTime := time.Now()
viper.AddConfigPath(".")
viper.SetConfigFile("config.json")
if err := viper.ReadInConfig(); err != nil {
log.Fatal("error trying read from config: %w", err)
}
u := user.NewUser(viper.GetString("users.user.username"), viper.GetString("users.user.password"))
db, err := postgres.NewPG()
if err != nil {
log.Fatalf("Cannot create postgres connection: %v", err)
}
postgres.WG.Add(1)
go getResponse(u)
go parseResponse(db)
postgres.WG.Wait()
fmt.Printf("Execution time: %v ", time.Since(startTime))
}
func getResponse(u *user.Creds) {
url := fmt.Sprintf("%s? closestLat=%s&closestLong=%s&latrange1=%s&latrange2=%s&longrange1=%s&longrange2=%s",
viper.GetString("wigle.url"),
viper.GetString("queries.closestLat"),
viper.GetString("queries.closestLong"),
viper.GetString("queries.latrange1"),
viper.GetString("queries.latrange2"),
viper.GetString("queries.longrange1"),
viper.GetString("queries.longrange2"),
)
j := 0
i := 0
for {
i++
fmt.Println(url)
req, err := http.NewRequest("GET", url, bytes.NewBuffer([]byte("")))
if err != nil {
log.Printf("Failed wraps request: %v", err)
continue
}
req.SetBasicAuth(u.Username, u.Password)
c := http.Client{}
resp, err := c.Do(req)
if err != nil {
log.Printf("Failed send request: %v", err)
continue
}
bytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Printf("Failed read response body: %v", err)
continue
}
var r response.NetSearchResponse
if err := json.Unmarshal(bytes, &r); err != nil {
log.Printf("Failed unmarshal: %v", err)
continue
}
receiveResp <- r.Results
fmt.Println(r.TotalResults, r.SearchAfter)
if r.SearchAfter == "" {
postgres.WG.Done()
return
}
url = fmt.Sprintf("%s? closestLat=%s&closestLong=%s&latrange1=%s&latrange2=%s&longrange1=%s&longrange2=%s&searchAfter=%s&first=%v" ,
viper.GetString("wigle.url"),
viper.GetString("queries.closestLat"),
viper.GetString("queries.closestLong"),
viper.GetString("queries.latrange1"),
viper.GetString("queries.latrange2"),
viper.GetString("queries.longrange1"),
viper.GetString("queries.longrange2"),
r.SearchAfter,
i,
)
j++
fmt.Println(j)
} // end of the request loop
} // end of getResponse
func parseResponse(db *sql.DB) {
for {
select {
case responses := <-receiveResp:
clearResponses := make([]response.WiFiNetworkWithLocation, 0, len(responses))
for _, val := range responses {
clearResponses = append(clearResponses, *val)
}
postgres.WG.Add(1)
go postgres.SaveToDB(db, "test", clearResponses)
}
}
}
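A side note that is not from the original post: the format strings above appear to have a stray space right after the ?, which ends up inside the first parameter name (possibly just a paste artifact). A small sketch of building the same search URL with net/url instead of fmt.Sprintf, which handles escaping and avoids that kind of issue (the config keys are assumed to match the ones used above):
func buildSearchURL(searchAfter string, first int) (string, error) {
	base, err := url.Parse(viper.GetString("wigle.url"))
	if err != nil {
		return "", err
	}
	q := url.Values{}
	q.Set("closestLat", viper.GetString("queries.closestLat"))
	q.Set("closestLong", viper.GetString("queries.closestLong"))
	q.Set("latrange1", viper.GetString("queries.latrange1"))
	q.Set("latrange2", viper.GetString("queries.latrange2"))
	q.Set("longrange1", viper.GetString("queries.longrange1"))
	q.Set("longrange2", viper.GetString("queries.longrange2"))
	if searchAfter != "" {
		// Carry the cursor returned by the previous page into the next request.
		q.Set("searchAfter", searchAfter)
		q.Set("first", strconv.Itoa(first))
	}
	base.RawQuery = q.Encode()
	return base.String(), nil
}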

How to do something repeatedly in a database action?

I am writing a user authentication system in Go. First, I prompt the user to sign up with an email, username, and password. Then I send a confirmation link to the user's email. The user must also select a title for their blog, which is prompted for after the confirmation link is clicked. How do I ensure that the user can't move on to the home page without a title?
My ConfirmEmail function is below:
func ConfirmEmail(w http.ResponseWriter, r *http.Request){
err := r.ParseForm()
if err != nil{
log.Fatal("Unable to parse data")
}
token := r.Form.Get("token")
db.ConnectDB()
current_time := time.Now().Unix()
user_id := 0
var date_generated int64
var date_expires int64
var date_used int64
row := db.Db.QueryRow("Select user_id, date_generated, date_expires, date_used from Token where token = ?", token)
if err := row.Scan(&user_id, &date_generated, &date_expires, &date_used); err != nil{
if err == sql.ErrNoRows{
//todo: no such token provide a link to signup..
fmt.Println("No such rows..")
} else {
log.Fatal("Something went wrong:", err)
}
}
//reuse of the token...
if (date_used != 0){
http.Redirect(w,r, "/signup", http.StatusFound)
}
// use of expired token...
if(date_expires < current_time){
//todo: inform about the expired token and prompt for re confirmation..
fmt.Println("Token expired..")
} else{
//todo: Check for blog title, if null prompt.
var title string
var username string
if err := db.Db.QueryRow("select username, blogTitle from User where user_id = ?", user_id).Scan(&username, &title); err != nil{
if err == sql.ErrNoRows{
http.Redirect(w, r, "/signup", http.StatusFound)
}
}
//want to do this until title is not provided..
if len(title) == 0{
err = templates.ExecuteTemplate(w, "chose-title.html", struct {
Username string
Msg string
}{
Username: username,
Msg: "",
})
if err != nil {
log.Fatal("Unable to render provided template:",err)
}
return
}
_, err = db.Db.Exec("Update Token set date_used = ? where token=?",current_time, token)
if err != nil {
log.Fatal("Unable to update with given data")
}
_, err = db.Db.Exec("Update User set Verified = true where user_id=?",user_id)
if err != nil {
log.Fatal("Unable to update with given data")
} else {
http.Redirect(w, r, "/login", http.StatusFound)
}
}
}
The main problematic part is (a snippet from the previous block):
if len(title) == 0{
err = templates.ExecuteTemplate(w, "chose-title.html", struct {
Username string
Msg string
}{
Username: username,
Msg: "",
})
if err != nil {
log.Fatal("Unable to render provided template:",err)
}
return
}
_, err = db.Db.Exec("Update Token set date_used = ? where token=?",current_time, token)
if err != nil {
log.Fatal("Unable to update with given data")
}
_, err = db.Db.Exec("Update User set Verified = true where user_id=?",user_id)
if err != nil {
log.Fatal("Unable to update with given data")
} else {
http.Redirect(w, r, "/login", http.StatusFound)
}
}
I can think of using a while loop here, but I don't think that would be a feasible option. Is there any other workaround or workflow to check this?
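This is not from the original question, but rather than looping, a common pattern is to check on every request whether the title has been set and keep redirecting back to the choose-title page until it is. A rough sketch as net/http middleware, assuming a hypothetical currentUserID helper for the session lookup, the db.Db handle used above, and an illustrative /choose-title route:
// requireBlogTitle keeps redirecting to the choose-title page until the user has a blog title.
func requireBlogTitle(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		userID := currentUserID(r) // hypothetical session/user lookup
		var title sql.NullString
		err := db.Db.QueryRow("select blogTitle from User where user_id = ?", userID).Scan(&title)
		if err != nil || !title.Valid || title.String == "" {
			// No title yet: send the user back to the title form instead of the home page.
			http.Redirect(w, r, "/choose-title", http.StatusFound)
			return
		}
		next.ServeHTTP(w, r)
	})
}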

How can I make a struct in Go for an object that has arrays of objects inside it?

I am using Vue.js on the frontend and Go on the backend. My data variable has data in the following format.
var data = {
software_type: this.$props.selected,
selected_solutions: this.fromChildChecked,
};
By doing console.log(data) in the frontend, I get the following output.
On the backend side, I have a struct in this format:
type Technology struct {
ID primitive.ObjectID `json:"_id,omitempty" bson:"_id,omitempty"`
SoftwareType string `json:"software_type" bson:"software_type"`
SelectedSolutions struct {
selectedSolutions []string
} `json:"selected_solutions" bson:"selected_solutions"`
}
I am fairly sure the problem is a mismatch between the format of the data I am sending and the struct that I have defined.
I am using MongoDB as a database.
When I submit the form, the data arrives in the DB in the following format, which means I am getting an empty object for selected_solutions.
{
"_id":{"$oid":"5f5a1fa8885112e153b5a890"},
"software_type":"Cross-channel Campain Mangment Software",
"selected_solutions":{}
}
This is the format I expect in the DB, or something similar to the one below.
{
"_id":{"$oid":"5f5a1fa8885112e153b5a890"},
"software_type":"Cross-channel Campain Mangment Software",
"selected_solutions":{
Adobe Campaign: ["Business to Customer (B2C)", "Business to Business (B2B)"],
Marin Software: ["E-Government", "M-Commerce"],
}
}
How can I change the struct to make it compatible with the data I am trying to send? Thank you in advance for any help.
EDIT: This is how I am submitting the data.
postUserDetails() {
var data = {
software_type: this.$props.selected,
selected_solutions: this.fromChildChecked,
};
console.log(data);
const requestOptions = {
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded" },
body: JSON.stringify(data),
};
fetch("http://localhost:8080/technology", requestOptions)
.then((response) => {
response.json().then((data) => {
if (data.result === "success") {
//this.response_message = "Registration Successfull";
console.log("data posted successfully");
} else if (data.result === "er") {
// this.response_message = "Reagestraion failed please try again";
console.log("failed to post data");
}
});
})
.catch((error) => {
console.error("error is", error);
});
},
mounted() {
this.postUserDetails();
},
This is the backend controller function.
//TechnologyHandler handles checkbox selection for technology section
func TechnologyHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("content-type", "application/json")
w.Header().Add("Access-Control-Allow-Credentials", "true")
var technologyChoices model.Technology
//var selectedSolution model.Selected
//reads the request body and stores it inside body
body, _ := ioutil.ReadAll(r.Body)
//body is a JSON object; json.Unmarshal() converts it into the Go Technology struct
err := json.Unmarshal(body, &technologyChoices)
var res model.TechnologyResponseResult
if err != nil {
res.Error = err.Error()
json.NewEncoder(w).Encode(res)
return
}
collection, err := db.TechnologyDBCollection()
if err != nil {
res.Error = err.Error()
json.NewEncoder(w).Encode(res)
return
}
_, err = collection.InsertOne(context.TODO(), technologyChoices)
if err != nil {
res.Error = "Error While Creating Technology choices, Try Again"
res.Result = "er"
json.NewEncoder(w).Encode(res)
return
}
res.Result = "success"
json.NewEncoder(w).Encode(res)
return
}
Based on your database structure, selected_solutions is an object containing string arrays:
type Technology struct {
ID primitive.ObjectID `json:"_id,omitempty" bson:"_id,omitempty"`
SoftwareType string `json:"software_type" bson:"software_type"`
SelectedSolutions map[string][]string `json:"selected_solutions" bson:"selected_solutions"`
}
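As a quick illustration (not part of the original answer), a payload shaped like the one in the question unmarshals straight into that struct; the literal below just reuses the values posted above:
raw := []byte(`{
	"software_type": "Cross-channel Campain Mangment Software",
	"selected_solutions": {
		"Adobe Campaign": ["Business to Customer (B2C)", "Business to Business (B2B)"],
		"Marin Software": ["E-Government", "M-Commerce"]
	}
}`)
var t Technology
if err := json.Unmarshal(raw, &t); err != nil {
	log.Fatal(err)
}
// t.SelectedSolutions["Adobe Campaign"] now holds the two selected option strings.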

How to write a response for a Kubernetes admission controller

I am trying to write a simple admission controller for pod naming (validation), but for some reason I am generating the wrong response.
Here is my code:
package main
import (
"fmt"
"encoding/json"
"io/ioutil"
"net/http"
"github.com/golang/glog"
// for Kubernetes
"k8s.io/api/admission/v1beta1"
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"regexp"
)
type myValidServerhandler struct {
}
// this is the handler function for the HTTP server
func (gs *myValidServerhandler) serve(w http.ResponseWriter, r *http.Request) {
var Body []byte
if r.Body != nil {
if data , err := ioutil.ReadAll(r.Body); err == nil {
Body = data
}
}
if len(Body) == 0 {
glog.Error("Unable to retrive Body from API")
http.Error(w,"Empty Body", http.StatusBadRequest)
return
}
glog.Info("Received Request")
// this is where I make sure the request is for the validation prefix
if r.URL.Path != "/validate" {
glog.Error("Not a Validataion String")
http.Error(w,"Not a Validataion String", http.StatusBadRequest)
return
}
// in this part the function takes the AdmissionReview and makes sure it is in the right
// JSON format
arRequest := &v1beta1.AdmissionReview{}
if err := json.Unmarshal(Body, arRequest); err != nil {
glog.Error("incorrect Body")
http.Error(w, "incorrect Body", http.StatusBadRequest)
return
}
raw := arRequest.Request.Object.Raw
pod := v1.Pod{}
if err := json.Unmarshal(raw, &pod); err != nil {
glog.Error("Error Deserializing Pod")
return
}
// this is where I make sure the pod name contains the kuku string
podnamingReg := regexp.MustCompile(`kuku`)
if podnamingReg.MatchString(string(pod.Name)) {
return
} else {
glog.Error("the pod does not contain \"kuku\"")
http.Error(w, "the pod does not contain \"kuku\"", http.StatusBadRequest)
return
}
// I think the main problem is with this part of the code, because the
// error in the events I am getting in the Kubernetes namespace is that
// I am sending 200 without a response body
arResponse := v1beta1.AdmissionReview{
Response: &v1beta1.AdmissionResponse{
Result: &metav1.Status{},
Allowed: true,
},
}
// generating the JSON response after the validation
resp, err := json.Marshal(arResponse)
if err != nil {
glog.Error("Can't encode response:", err)
http.Error(w, fmt.Sprintf("couldn't encode response: %v", err), http.StatusInternalServerError)
}
glog.Infof("Ready to write response ...")
if _, err := w.Write(resp); err != nil {
glog.Error("Can't write response", err)
http.Error(w, fmt.Sprintf("cloud not write response: %v", err), http.StatusInternalServerError)
}
}
The code works as expected except for the positive case (where the pod name meets the criteria).
There is another file with a main function that just loads the TLS files and starts the HTTP server.
So after some digging I found what was wrong with my code.
First, this part:
if podnamingReg.MatchString(string(pod.Name)) {
return
} else {
glog.Error("the pod does not contain \"kuku\"")
http.Error(w, "the pod does not contain \"kuku\"", http.StatusBadRequest)
return
}
by writing "return" twice I discarded the rest of the code and more so I haven't attached the request UID to the response UID and because I am using the v1 and not the v1beta1 I needed to adding the APIVersion in the response
so the rest of the code looks like :
arResponse := v1beta1.AdmissionReview{
Response: &v1beta1.AdmissionResponse{
Result: &metav1.Status{},
Allowed: false,
},
}
podnamingReg := regexp.MustCompile(`kuku`)
if podnamingReg.MatchString(string(pod.Name)) {
fmt.Printf("the pod %s is up to the name standard", pod.Name)
arResponse.Response.Allowed = true
}
arResponse.APIVersion = "admission.k8s.io/v1"
arResponse.Kind = arRequest.Kind
arResponse.Response.UID = arRequest.Request.UID
So I needed to add those two parts and make sure that, when the pod name does not meet the standard, I return a proper response.
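Not from the original answer, but for the deny case an admission webhook normally still answers with HTTP 200 and Allowed set to false, optionally with a message in Result, rather than calling http.Error. A rough sketch of that branch, reusing the variables from the code above:
if !podnamingReg.MatchString(pod.Name) {
	// Deny the admission request but still return a well-formed AdmissionReview.
	arResponse.Response.Allowed = false
	arResponse.Response.Result = &metav1.Status{
		Message: fmt.Sprintf("pod name %q does not contain \"kuku\"", pod.Name),
	}
}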

Stream file upload to AWS S3 using Go

I want to stream a multipart/form-data (large) file upload directly to AWS S3 with as little memory and file disk footprint as possible. How can I achieve this? Resources online only explain how to upload a file and store it locally on the server.
You can use the upload manager to stream the file and upload it; you can read the comments in its source code.
You can also configure parameters to set the part size, concurrency and max upload parts. Below is sample code for reference.
package main
import (
"fmt"
"os"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
)
var filename = "file_name.zip"
var myBucket = "myBucket"
var myKey = "file_name.zip"
var accessKey = ""
var accessSecret = ""
func main() {
var awsConfig *aws.Config
if accessKey == "" || accessSecret == "" {
//load default credentials
awsConfig = &aws.Config{
Region: aws.String("us-west-2"),
}
} else {
awsConfig = &aws.Config{
Region: aws.String("us-west-2"),
Credentials: credentials.NewStaticCredentials(accessKey, accessSecret, ""),
}
}
// The session the S3 Uploader will use
sess := session.Must(session.NewSession(awsConfig))
// Create an uploader with the session and default options
//uploader := s3manager.NewUploader(sess)
// Create an uploader with the session and custom options
uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
u.PartSize = 5 * 1024 * 1024 // The minimum/default allowed part size is 5MB
u.Concurrency = 2 // default is 5
})
//open the file
f, err := os.Open(filename)
if err != nil {
fmt.Printf("failed to open file %q, %v", filename, err)
return
}
//defer f.Close()
// Upload the file to S3.
result, err := uploader.Upload(&s3manager.UploadInput{
Bucket: aws.String(myBucket),
Key: aws.String(myKey),
Body: f,
})
//in case it fails to upload
if err != nil {
fmt.Printf("failed to upload file, %v", err)
return
}
fmt.Printf("file uploaded to, %s\n", result.Location)
}
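Since the question is about a multipart/form-data upload rather than a local file, the same uploader can also be fed the multipart part directly, because a *multipart.Part is an io.Reader; nothing has to be written to disk. A rough sketch of such a handler, reusing the uploader and bucket variables from the code above (the "file" field name is an assumption):
func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// Walk the multipart body part by part instead of buffering the whole file.
	mr, err := r.MultipartReader()
	if err != nil {
		http.Error(w, "expected multipart/form-data", http.StatusBadRequest)
		return
	}
	for {
		part, err := mr.NextPart()
		if err == io.EOF {
			break
		}
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		if part.FormName() != "file" { // the field name "file" is an assumption
			continue
		}
		// part is an io.Reader, so the upload manager streams it to S3 in parts.
		_, err = uploader.Upload(&s3manager.UploadInput{
			Bucket: aws.String(myBucket),
			Key:    aws.String(part.FileName()),
			Body:   part,
		})
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}
	w.WriteHeader(http.StatusOK)
}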
You can do this using minio-go:
n, err := s3Client.PutObject("bucket-name", "objectName", object, size, "application/octet-stream")
PutObject() automatically does a multipart upload internally.
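The call above appears to use an older minio-go signature; a rough sketch with the current v7 API, which takes a context and an options struct (the endpoint, credentials and the object reader below are placeholders):
client, err := minio.New("s3.amazonaws.com", &minio.Options{
	Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
	Secure: true,
})
if err != nil {
	log.Fatal(err)
}
// With objectSize -1 the client does a streaming multipart upload of the reader.
info, err := client.PutObject(context.Background(), "bucket-name", "objectName",
	object, -1, minio.PutObjectOptions{ContentType: "application/octet-stream"})
if err != nil {
	log.Fatal(err)
}
fmt.Println("uploaded", info.Key, "size", info.Size)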
Another option is to mount the S3 bucket with goofys and then stream your writes to the mountpoint. goofys does not buffer the content locally so it will work fine with large files.
I was trying to do this with the aws-sdk-go v2 package, so I had to change the code from maaz's answer a bit. I'm leaving it here for others:
type TokenMeta struct {
AccessToken string
SecretToken string
SessionToken string
BucketName string
}
// Create S3Client struct with the token meta and use it as a receiver for this method
func (s3Client S3Client) StreamUpload(fileToUpload string, fileKey string) error {
accessKey := s3Client.TokenMeta.AccessToken
secretKey := s3Client.TokenMeta.SecretToken
awsConfig, err := config.LoadDefaultConfig(context.TODO(),
config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(accessKey, secretKey, s3Client.TokenMeta.SessionToken)),
)
if err != nil {
return fmt.Errorf("error creating aws config: %v", err)
}
client := s3.NewFromConfig(awsConfig)
uploader := manager.NewUploader(client, func(u *manager.Uploader) {
u.PartSize = 5 * 1024 * 1024
u.BufferProvider = manager.NewBufferedReadSeekerWriteToPool(10 * 1024 * 1024)
})
f, err := os.Open(fileToUpload)
if err != nil {
return fmt.Errorf("failed to open fileToUpload %q, %v", fileToUpload, err)
}
defer func(f *os.File) {
err := f.Close()
if err != nil {
fmt.Errorf("error closing fileToUpload: %v", err)
}
}(f)
inputObj := &s3.PutObjectInput{
Bucket: aws.String(s3Client.TokenMeta.BucketName),
Key: aws.String(fileKey),
Body: f,
}
uploadResult, err := uploader.Upload(context.TODO(), inputObj)
if err != nil {
return fmt.Errorf("failed to uploadResult fileToUpload, %v", err)
}
fmt.Printf("%s uploaded to, %s\n", fileToUpload, uploadResult.Location)
return nil
}
I didn't try it, but if I were you I'd try the multipart upload option.
You can read the multipart upload documentation.
Here is a Go example for multipart upload and multipart upload abort.