I'm trying to upload an image to a project in GitLab. According to the documentation
it shouldn't be rocket science. I have tried passing the image as a URL and as a base64 representation. If I encode the base64 URL there is a little progress: an internal server error. Any ideas? Thanks!
Since you didn't mention how you want to upload it:
With curl:
curl --request POST --header "PRIVATE-TOKEN: XXXXXXXXXX" --form "file=@dk.png" https://gitlab.example.com/api/v4/projects/5/uploads
5 is the ID of your project.
See the GitLab API docs.
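If you would rather do this from Python, the python-gitlab library wraps the same uploads endpoint. A minimal sketch, assuming python-gitlab is installed and the host, token, and project ID placeholders are replaced with your own:

import gitlab

# connect to your GitLab instance (placeholder URL and token)
gl = gitlab.Gitlab("https://gitlab.example.com", private_token="XXXXXXXXXX")
project = gl.projects.get(5)

# POST /projects/:id/uploads, the same endpoint the curl command above hits
upload = project.upload("dk.png", filepath="dk.png")
print(upload["url"])       # relative URL of the uploaded file
print(upload["markdown"])  # ready-to-paste Markdown for the image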
According to this post:
Encoding an image file with base64
you can see how to convert binary files (images, zip archives, pickles, etc.) to a base64 string.
This is part of my code. Hope it is helpful.
import base64
from os import path

import gitlab

# HOST, PERSONAL_TOKEN and project_id are placeholders for your own values
def test_upload_img_to_repo():
    gl = gitlab.Gitlab(url=HOST, private_token=PERSONAL_TOKEN)
    project = gl.projects.get(project_id)

    file_path = path.join(path.dirname(__file__), 'test_upload.txt')
    img_path = path.join(path.dirname(__file__), 'logo.png')

    data = {
        'branch': 'master',
        'commit_message': 'blah blah blah',
        'actions': [
            {
                'action': 'update',
                'file_path': 'upload.txt',
                'content': open(file_path).read(),
            },
            {
                # Binary files need to be base64 encoded
                'action': 'create',
                'file_path': 'folder/logo.png',
                'content': base64.b64encode(open(img_path, 'rb').read()).decode(),
                'encoding': 'base64',
            }
        ]
    }

    commit = project.commits.create(data)
    print(commit)
I'm trying to upload images to a Digital Ocean space from the browser. These images should be public. I'm able to upload the images successfully.
However, though the ACL is set to public-read, the uploaded files are always private.
I know they're private because a) the dashboard says the permissions are "private", b) the public URLs don't work, and c) manually changing the permissions to "public" in the dashboard fixes everything.
Here's the overall process I'm using.
Create a pre-signed URL on the backend
Send that URL to the browser
Upload the image to that pre-signed URL
Any ideas why the images aren't public?
Code
The following examples are written in TypeScript and use the AWS v3 SDK.
Backend
This generates the pre-signed URL to upload a file.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({
  region: 'nyc3',
  endpoint: 'https://nyc3.digitaloceanspaces.com',
  credentials: {
    accessKeyId: process.env.DIGITAL_OCEAN_SPACES_KEY,
    secretAccessKey: process.env.DIGITAL_OCEAN_SPACES_SECRET,
  },
})

const command = new PutObjectCommand({
  ACL: 'public-read',
  Bucket: 'bucket-name',
  Key: fileName,
  ContentType: mime,
})

const url = await getSignedUrl(client, command)
The pre-signed URL is then sent to the browser.
Frontend
This is the code on the client to actually upload the file to Digital Ocean. file is a File object.
const uploadResponse = await fetch(url, {
  headers: {
    'Content-Type': file.type,
    'Cache-Control': 'public,max-age=31536000,immutable',
  },
  body: file,
  method: 'PUT',
})
Metadata
AWS SDK: 3.8.0
Turns out that for DigitalOcean, you also need to set the public-read ACL as a header in the PUT request.
// front-end
const uploadResponse = await fetch(url, {
  headers: {
    'Content-Type': file.type,
    'Cache-Control': 'public,max-age=31536000,immutable',
    'x-amz-acl': 'public-read', // add this line
  },
  body: file,
  method: 'PUT',
})
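For reference, if the backend were Python instead of TypeScript, the same signing-plus-upload flow might look roughly like this with boto3 and requests; a sketch only, with the bucket, key, and MIME type as placeholder assumptions (Spaces is S3-compatible, so the S3 client works against the nyc3 endpoint):

import os
import boto3
import requests

# server side: sign the PUT, including the ACL, so the header below is allowed
client = boto3.client(
    "s3",
    region_name="nyc3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
    aws_access_key_id=os.environ["DIGITAL_OCEAN_SPACES_KEY"],
    aws_secret_access_key=os.environ["DIGITAL_OCEAN_SPACES_SECRET"],
)
url = client.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "bucket-name",
        "Key": "logo.png",
        "ContentType": "image/png",
        "ACL": "public-read",
    },
    ExpiresIn=3600,
)

# client side: the upload must repeat the signed headers, x-amz-acl included
with open("logo.png", "rb") as f:
    requests.put(url, data=f, headers={
        "Content-Type": "image/png",
        "x-amz-acl": "public-read",
    })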
I don't have the reputation to comment, hence adding a response. Thank you @Nick ... this is one of the few working examples of code I have seen for DigitalOcean pre-signed URLs. While the official DigitalOcean description here mentions that Content-Type is needed for uploading with pre-signed URLs, there is no example code.
Another mistake that prevented me from uploading a file using pre-signed URLs in DigitalOcean was using 'Content-Type': 'multipart/form-data' and FormData().
After seeing this post, I followed @Nick's suggestion of using a File() object and 'Content-Type': '<relevant_mime>'. Then the file upload worked like a charm. This is also not covered in the official docs.
Try this to force the ACL to public in DigitalOcean Spaces:
s3cmd --access_key=YOUR_ACCESS_KEY \
  --secret_key=YOUR_SECRET_KEY \
  --host=YOUR_BUCKET_REGION.digitaloceanspaces.com \
  --host-bucket=YOUR_BUCKET_NAME.YOUR_BUCKET_REGION.digitaloceanspaces.com \
  --region=YOUR_BUCKET_REGION \
  setacl s3://YOUR_BUCKET_NAME --acl-public
I'm now using the mock server from https://www.mock-server.com/ and running it in a Docker container.
Now I would like the response to change as the request body changes. I've looked at dynamic responses on the official website for a while, but have no idea how to extract specific data from the request body.
curl -v -X PUT "http://localhost:1080/mockserver/expectation" -d '{
  "httpRequest": {
    "path": "/some/path"
  },
  "httpResponseTemplate": {
    "template": "return { statusCode: 200, body: request.body };",
    "templateType": "JAVASCRIPT"
  }
}'
The code above creates a simple expectation, which responds with the request body. For example,
$ curl http://localhost:1080/some/path -d '{"name":"welly"}'
{"name":"welly"} //response
Now I want to change the way the response is built. For example, I would like to send {a:A, b:B} and get the response {a:B, b:A}.
So, how do I modify the JSON data from the request body and return it in the response? I guess there are methods to extract specific data from the JSON, modify it, and so on. I would also like to know how to find this kind of detail more easily, since the official website and the full REST API JSON specification
(https://app.swaggerhub.com/apis/jamesdbloom/mock-server-openapi/5.11.x#/expectation/put_expectation) are hard for me to understand.
Thanks a lot!
I needed to do this as well, and I think I got it to work. Have a look at my curl example for the expectation; I hope it helps:
curl -v -X PUT "http://localhost:1080/mockserver/expectation" -d '{
  "httpRequest": {
    "path": "/api/fun",
    "method": "POST"
  },
  "httpResponseTemplate": {
    "template": "req = JSON.parse(request.body.string); rid = req[\"id\"]; return { statusCode: 201, body: {new_id: rid} };",
    "templateType": "JAVASCRIPT"
  }
}'
After you do this, if you send:
curl -X POST http://localhost:1080/api/fun --data '{"id": "test_1"}'
it should return:
{ "new_id" : "test_1" }
JavaScript templating is supported via the Nashorn engine and thus will no longer be available from Java 15. Here is the note from the MockServer docs:
From Java 15 Nashorn is no longer part of the JDK but it is available as a separate library that requires Java 11+. Once MockServer minimum Java version is 11 then this separate library will be used.
I've tried to write an equivalent Karate script for the below curl request:
curl -X PUT \
'http://localhost:8055/uploadfile' \
-H 'content-type: multipart/form-data;' \
-F code=@/Users/test/Downloads/Next.zip
The Karate script I tried:
Given path 'uploadfile'
#Given header Content-Type = 'multipart/form-data'
And form field code = '/Users/test/Downloads/Next.zip'
#And multipart file code = { read: '/Users/test/Downloads/Next.zip' , contentType: 'application/zip' }
When method PUT
Then status 200
Am I making a mistake here (I've tried different things)? I'm still not getting the expected API response.
FYI: I got that curl command from Postman and it works fine.
It is hard to tell with the limited info you have provided. Try this:
Given url 'http://localhost:8055/uploadfile'
And multipart file code = { read: 'file:/Users/test/Downloads/Next.zip', filename: 'Next.zip', contentType: 'application/zip' }
When method put
If you are still stuck follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue (or use postman ;)
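If you want to rule out the endpoint itself, the same multipart PUT can be reproduced outside Karate. A quick Python sketch with requests, reusing the URL and file path from the question:

import requests

# mirrors: curl -X PUT -F code=@/Users/test/Downloads/Next.zip http://localhost:8055/uploadfile
with open("/Users/test/Downloads/Next.zip", "rb") as f:
    resp = requests.put(
        "http://localhost:8055/uploadfile",
        files={"code": ("Next.zip", f, "application/zip")},
    )
print(resp.status_code, resp.text)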
I'm trying to send a base64-encoded image to the ocr.space API following https://ocr.space/blog/2016/10/ocr-api-supports-base64.html and https://ocr.space/ocrapi . You can see my Postman settings in the screenshot.
However, when I submit it I see:
"ErrorDetails": "Not a valid base64 image. The accepted base64 image format is 'data:<content_type>;base64,<base64_image_content>'. Where 'content_type' like 'image/png' or 'image/jpg' or 'application/pdf' or any other supported type.",
Using Postman I have created the following curl request https://pastebin.com/ajfC3a5r
What am I doing wrong?
How about this modification?
Modification points:
In your base64 data here, \n characters are included.
When I decoded the base64 data after removing the \n characters, I found that the data was a PDF file. So the content type was not image/png.
I think this is what causes the error shown in your question. So please modify as follows.
Modified curl command:
Please remove \n from the base64 data.
For the header of the base64 data, please change data:image/png;base64,##### base64 data ##### to data:application/pdf;base64,##### base64 data #####.
With the above modifications made, how about using the following curl command?
curl -X POST \
https://api.ocr.space/parse/image \
-H "apikey:#####" \
-F "language=eng" \
-F "isOverlayRequired=false" \
-F "iscreatesearchablepdf=false" \
-F "issearchablepdfhidetextlayer=false" \
-F "base64Image=data:application/pdf;base64,##### base64 data #####"
Result:
When the above sample is run, the following value is returned.
{
  "ParsedResults": [
    {
      "TextOverlay": {
        "Lines": [],
        "HasOverlay": false,
        "Message": "Text overlay is not provided as it is not requested"
      },
      "TextOrientation": "0",
      "FileParseExitCode": 1,
      "ParsedText": "##### text data #####",
      "ErrorMessage": "",
      "ErrorDetails": ""
    }
  ],
  "OCRExitCode": 1,
  "IsErroredOnProcessing": false,
  "ProcessingTimeInMilliseconds": "123",
  "SearchablePDFURL": "Searchable PDF not generated as it was not requested."
}
Note:
In my environment, I confirmed that the API worked using the modified base64 data and the sample curl above.
The curl sample including the modified base64 data is this.
If you use this, please set your API key.
Or you can directly use the image file instead of base64 data. The sample curl is:
curl -X POST https://api.ocr.space/parse/image -H "apikey:#####" -F "file=@sample.png"
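If you would rather build the request programmatically than in Postman, here is a Python sketch of the base64 variant (assuming the requests library; note that base64.b64encode never inserts the \n characters that caused the original error):

import base64
import requests

# encode the PDF; b64encode produces a single line with no breaks
with open("sample.pdf", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "https://api.ocr.space/parse/image",
    headers={"apikey": "#####"},  # set your own API key
    data={
        "language": "eng",
        "isOverlayRequired": "false",
        "base64Image": "data:application/pdf;base64," + b64,
    },
)
print(resp.json())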
I'm new to Elasticsearch and I read here https://www.elastic.co/guide/en/elasticsearch/plugins/master/mapper-attachments.html that the mapper-attachments plugin is deprecated in Elasticsearch 5.0.0.
I'm now trying to index a PDF file with the new ingest-attachment plugin and upload the attachment.
What I've tried so far is
curl -H 'Content-Type: application/pdf' -XPOST localhost:9200/test/1 -d @/cygdrive/c/test/test.pdf
but I get the following error:
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"}},"status":400}
I would expect the PDF file to be indexed and uploaded. What am I doing wrong?
I also tested Elasticsearch 2.3.3, but the mapper-attachments plugin is not valid for that version, and I don't want to use an older version of Elasticsearch.
You need to make sure you have created your ingest pipeline with:
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data",
        "indexed_chars" : -1
      }
    }
  ]
}
Then you can make a PUT (not POST) to your index, using the pipeline you've created:
PUT my_index/my_type/my_id?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}
In your example, it should be something like:
curl -H 'Content-Type: application/json' -XPUT 'localhost:9200/test/1?pipeline=attachment' -d "{\"data\": \"$(base64 -w 0 /cygdrive/c/test/test.pdf)\"}"
Remember that the PDF content must be base64 encoded; the $(base64 -w 0 ...) substitution above does that inline.
Hope it will help you.
Edit 1
Please make sure to read these; they helped me a lot:
Elastic Ingest
Ingest Plugin
Ingest Presentation
Edit 2
Also, you must have the ingest-attachment plugin installed.
./bin/elasticsearch-plugin install ingest-attachment
Edit 3
Please, before you create your ingest processor (attachment), create your index and map the fields you will use. Make sure the map has the data field (the same name as "field" in your attachment processor), so ingest can process it and fill it with your PDF content.
I added the indexed_chars option to the ingest processor with a value of -1, so you can index large PDF files.
Edit 4
The mapping should be something like this:
PUT my_index
{
  "mappings" : {
    "my_type" : {
      "properties" : {
        "attachment.data" : {
          "type": "text",
          "analyzer" : "brazilian"
        }
      }
    }
  }
}
In this case, I use the brazilian filter, but you can remove that or use your own.
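For completeness, the whole flow (create the pipeline, then index a base64-encoded PDF through it) might look like this in Python with the elasticsearch client of the 5.x era; a sketch, with the index, type, and field names taken from the snippets above:

import base64
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# 1. create the attachment pipeline defined earlier
es.ingest.put_pipeline(id="attachment", body={
    "description": "Extract attachment information",
    "processors": [{"attachment": {"field": "data", "indexed_chars": -1}}],
})

# 2. base64-encode the PDF and index it through the pipeline
with open("/cygdrive/c/test/test.pdf", "rb") as f:
    data = base64.b64encode(f.read()).decode()

es.index(index="my_index", doc_type="my_type", id="my_id",
         pipeline="attachment", body={"data": data})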