Loading DICOM from zip archive - zip.js

I am trying to load DICOM files from a DICOM server. Loading a single file via its URL works fine.
Now I want to load a whole series of DICOM data. I get the data from the server as a zip archive via an HTTP request.
I have tried to unzip the response with the zip.js library and pass the unzipped data to the loader.parse function, to load the DICOM files as in the "viewers_upload" example. But I get an error saying the file could not be parsed.
Is there a way to load the data without a URL? Or how do I have to modify the example so that it works with a zip archive?
This is the code that unzips the file and passes it to the parser:
reader.getEntries(function (entries) {
  if (entries.length) {
    // get the data of the first entry in the zip file as an ArrayBuffer
    entries[0].getData(new zip.ArrayBufferWriter(), function (dicom) {
      // pass the unzipped data to the loader
      loader.parse({url: "dicomName", dicom});
    }, function (current, total) {
      // progress callback (unused)
    });
  }
});
The error message is:
"dicomParser.readFixedString: attempt to read past end of buffer"
"Uncaught (in promise) parsers.dicom could not parse the file"
I think the problem might be the datatype returned from the zip file? Which type do I have to pass to the parse function? What structure does the parser expect the data to have? What buffer length does the parser expect?
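For reference, a rough way I could check what the parser actually receives (just a sketch, assuming the dicomParser library named in the error message is accessible directly) would be to wrap the ArrayBuffer in a Uint8Array and parse it on its own; if this already throws, the unzipped buffer is not a complete DICOM file:
entries[0].getData(new zip.ArrayBufferWriter(), function (arrayBuffer) {
  // dicomParser expects a Uint8Array covering the whole file
  var byteArray = new Uint8Array(arrayBuffer);
  try {
    var dataSet = dicomParser.parseDicom(byteArray); // throws if the buffer is truncated
    console.log('parsed OK, transfer syntax:', dataSet.string('x00020010'));
  } catch (e) {
    console.error('not a complete DICOM file:', e);
  }
});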

Flux File upload validation - file type

I am new to reactive programming. I am using Flux for file upload. I need to make sure that all the uploaded files are of a specific type; if not, I need to fail the request.
files.flatMap(input -> validateFile(input))
     .flatMap(output -> uploadToAzure(output))
My problem is that when the second file is of an unacceptable type, the first file has already been processed. I want validateFile to scan all the files first and only then continue with further processing.
Basically, if you want to process all the files at once rather than one by one, you should first collect them, since you are dealing with a Flux. You can achieve that with collectList().
Then, having the List of your files, you can validate and process them. Here is an example of doing the validation with handle() on your Flux of files:
.collectList()                     // collect all files into a List
.handle((files, sink) -> {
    // validate all your files here, for example with the Stream API:
    // boolean allValid = files.stream().allMatch(...);
    if (allValid) {
        // emit the files downstream if all of them are valid
        sink.next(files);
    } else {
        // signal an error if some files are not valid
        sink.error(new Exception("Some files are not valid"));
    }
})
// ... further processing
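For completeness, here is a self-contained sketch of the same idea; the IncomingFile type, the allowed-type rule, and the uploadToAzure stub below are placeholders standing in for your real code:
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.util.List;

public class UploadValidation {

    // placeholder for the real file type coming from the request
    record IncomingFile(String name, String contentType) {}

    static boolean isAllowedType(IncomingFile f) {
        return "application/pdf".equals(f.contentType()); // assumed rule: only PDFs are accepted
    }

    // placeholder for the real Azure upload call
    static Mono<String> uploadToAzure(List<IncomingFile> files) {
        return Mono.just("uploaded " + files.size() + " files");
    }

    public static void main(String[] args) {
        Flux<IncomingFile> files = Flux.just(
                new IncomingFile("a.pdf", "application/pdf"),
                new IncomingFile("b.txt", "text/plain"));

        files.collectList()                                     // gather every file first
             .<List<IncomingFile>>handle((list, sink) -> {
                 if (list.stream().allMatch(UploadValidation::isAllowedType)) {
                     sink.next(list);                           // all files valid: pass them on
                 } else {
                     sink.error(new IllegalArgumentException("Some files are not valid"));
                 }
             })
             .flatMap(UploadValidation::uploadToAzure)          // upload runs only when validation passed
             .subscribe(System.out::println,
                        err -> System.err.println("rejected: " + err.getMessage()));
    }
}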
This is just one of many possible ways to achieve what you want.
P.S. You should really have provided more code and formatted it properly.

How to convert CSV data to JSON object in react-native

I want to read CSV data from a local file and convert it into a JSON object (iOS and Android). Is there any plugin available that can convert a CSV file to JSON in React Native?
You should be fine using papaparse.
Install it via npm.
npm install papaparse
Parse a CSV string to JSON as follows.
import PapaParse from 'papaparse'
const options = {} // optional parse configuration
const results = PapaParse.parse(str, options);
The variable str contains the CSV text. The return value is an object whose data property holds the parsed rows, along with errors and meta fields describing the parse.
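For example (a small self-contained sketch with made-up data), parsing a CSV that has a header row into an array of plain objects looks like this:
import PapaParse from 'papaparse'

const csv = 'name,age\nAlice,30\nBob,25'
const results = PapaParse.parse(csv, { header: true, skipEmptyLines: true })

// results.data -> [ { name: 'Alice', age: '30' }, { name: 'Bob', age: '25' } ]
// results.errors lists any rows that failed to parse
console.log(JSON.stringify(results.data))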
There exists a convenient wrapper around papaparse for react native called react-native-csv, but it doesn’t seem to be updated frequently.

Size of PDF breaks FastAPI using python-multipart?

I am trying to upload a PDF to FastAPI. After turning the PDF into a base64 blob and storing it in a .txt file, I POST this file to FastAPI using Postman.
This is my server-side code:
from fastapi import FastAPI, File, UploadFile
import base64

app = FastAPI()

@app.post("/uploadfile/")
async def create_upload_file(file: UploadFile = File(...)):
    contents = await file.read()
    blob = base64.b64decode(contents)
    pdf = open('result.pdf', 'wb')
    pdf.write(blob)
    pdf.close()
    return {"filename": file.filename}
This procedure works fine for a single-page PDF document of size 279KB (blob-size: 372KB), but it doesn't for a multi-page document of size 1.8MB (blob-size: 2.4MB).
When I try, I get the following WARNING and a 400 Bad Request response (along with the response body "detail": "There was an error parsing the body"):
"Did not find boundary character 55 at index 2"
I'm sure there must be an explanation for this behavior? Maybe it has something to do with async?
This is most likely an issue with how the file is saved using open().
For large files, pdf.close() can run before pdf.write() has finished flushing all of the contents to disk.
To ensure the whole file is written before it is closed, use a with statement, like this:
with open('failed.pdf', 'wb') as outfile:
    outfile.write(blob)
With the with statement you do not need to call close() after writing; it is also considered better practice than keeping the file handle in a local variable and closing it manually.
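Putting that together with the handler from the question, a minimal sketch of the whole endpoint might look like this (filenames kept from the question):
from fastapi import FastAPI, File, UploadFile
import base64

app = FastAPI()

@app.post("/uploadfile/")
async def create_upload_file(file: UploadFile = File(...)):
    contents = await file.read()       # raw base64 text from the uploaded .txt file
    blob = base64.b64decode(contents)  # decode back to the original PDF bytes

    # the context manager flushes and closes the file before the response is returned
    with open("result.pdf", "wb") as outfile:
        outfile.write(blob)

    return {"filename": file.filename}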

how to get more info than generic "Failed to parse JSON: No active field found.; ParsedString returned false; Could not parse value" on BigQuery load?

We're trying BigQuery for the first time, with data extracted from Mongo in JSON format. I kept getting this generic parse error upon loading the file. But then I tried a smaller subset of the file, 20 records, and it loaded fine. This tells me it's not the general structure of the file, which I had originally thought was the problem. Is there any way to get more info on the parse error, such as the text of the record it is trying to parse when the error occurs?
I also tried using the max errors field, but that didn't work either.
This was via the website. I also tried it via the Google Cloud SDK command line 'bq load...' and got the same error.
This error is most likely caused by some of the JSON records not complying with the table schema. It is not clear whether you used the schema autodetect feature or supplied a schema for the load, but here is one example where such an error could happen:
{ "a" : "1" }
{ "a" : { "b" : "2" } }
If you only have a few of these and they really are invalid records, you can skip them automatically by using the max_bad_records option for the load job. More details at: https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-json
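For example, with the bq command line the option can be passed like this (the dataset, table, and bucket names are placeholders; add --autodetect or a schema argument if the table does not exist yet):
bq load \
  --source_format=NEWLINE_DELIMITED_JSON \
  --max_bad_records=10 \
  mydataset.mytable \
  gs://mybucket/mongo_export.json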

create a temp image file for upload

I have created an HTML5 image uploader using canvas.
I get the image data using
canvas.toDataURL();
which is in the form
data:image/png;base64,<base64image string>
I send the above data to PHP, which is then used to upload the image to an Amazon server.
I normally pass the return value of
file_get_contents(path_to_file_to_upload);
to the Amazon SDK and the work gets done.
Now how do I convert the base64 image data into the kind of data file_get_contents() returns, so I can upload the file?
I am not allowed to create a file on the server. Is there any way of creating a temp image and getting the file_get_contents() data from that temp file?
Pass the return value of base64_decode() to the AWS SDK instead of file_get_contents(). file_get_contents() loads a file into a string; base64_decode() takes a base64-encoded string and returns the decoded string. Since you have a base64 string and not a file, base64_decode() is the call you want.
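A minimal sketch of that, assuming the data URL arrives in $_POST['image'] and the upload goes through the AWS SDK for PHP's S3 putObject call (both of those are assumptions about your setup):
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$dataUrl = $_POST['image'];                              // "data:image/png;base64,...."
$base64  = substr($dataUrl, strpos($dataUrl, ',') + 1);  // strip the "data:image/png;base64," prefix
$binary  = base64_decode($base64);                       // same kind of string file_get_contents() would return

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',                            // placeholder region
]);

$s3->putObject([
    'Bucket'      => 'my-bucket',                        // placeholder bucket
    'Key'         => 'uploads/image.png',
    'Body'        => $binary,
    'ContentType' => 'image/png',
]);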