Invalid characters in fileName, API response after sending Zip buffer - api

I'm trying to send the buffer of a zipped directory to an API that expects one, but I'm getting a rather odd response:
{
  "message": "invalid characters in fileName: store\\\\blocks/\\nError: invalid characters in fileName: store\\\\blocks/\\n at /usr/local/data/service/node_modules/yauzl/index.js:352:83\\n at /usr/local/data/service/node_modules/yauzl/index.js:473:5\\n at /usr/local/data/service/node_modules/fd-slicer/index.js:32:7\\n at FSReqCallback.wrapper [as oncomplete] (fs.js:520:5)"
}
I now suspect it's something related to the final file name, but since I'm sending a raw buffer I have no idea whether that even applies here. If it does, I'd love to know how to change it, or to hear from anyone who has run into something like this how they solved it!
Some code here:
Zip + send
// ? Zip the file, using the zip utils.
let file: Vec<u8> = gzip::file::zip(path);
// ? Send the file to the builder.
match builder::link(file, token) {
    Ok(res) => {
        println!("{:?}", res.text());
    }
    Err(e) => {
        println!("{:?}", e);
    }
}
API
let client = reqwest::blocking::Client::new(); // Create a new blocking HTTP client.
return client // Set up the request.
    .post(Routes::assemble(Link)) // Define the endpoint.
    .header(ACCEPT, "application/json, text/plain, */*") // Define the headers.
    .header(ACCEPT_ENCODING, "gzip") // More headers.
    .header(CONTENT_LENGTH, file.len()) // And more headers.
    .header(CONTENT_TYPE, "application/octet-stream") // Guess what.
    .header(AUTHORIZATION, format!("Bearer {}", token)) // One more.
    .body(file) // And finally the body.
    .send(); // Just wrap it up and send it.
Abridged zipping function
// `it` is a directory walker over `path` and `prefix` is the root being
// zipped; both are set up earlier in the function (omitted here).
let mut buf = Vec::new(); // Buffer that will hold the bytes of the final zip.
let mut cursor = Cursor::new(&mut buf); // Seekable cursor over that buffer.
let mut zip = ZipWriter::new(&mut cursor); // The zip writer itself.
let options = FileOptions::default()
    .compression_method(zip::CompressionMethod::Stored)
    .unix_permissions(0o755);
let mut buffer = Vec::new();
for entry in it {
    let path = entry.path(); // Get the file path.
    let name = path.strip_prefix(prefix).unwrap(); // Entry name, relative to the root.
    if path.is_file() {
        zip.start_file(name.to_str().unwrap(), options).unwrap();
        let mut f = File::open(path).unwrap();
        f.read_to_end(&mut buffer).unwrap();
        zip.write_all(&buffer).unwrap();
        buffer.clear();
    } else if !name.as_os_str().is_empty() {
        zip.add_directory(name.to_str().unwrap(), options).unwrap();
    }
}
zip.finish().unwrap(); // Close the file in the zip archive.
drop(zip);
return buf;
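For context, the rejected fileName in the error contains backslashes, which yauzl (the unzip library in the server's stack trace) refuses, so the entry names are most likely being built with Windows path separators. A hypothetical sketch of how an entry name could be normalized before start_file, assuming prefix is the directory root as in the abridged function above:
// hypothetical sketch: join the path components with '/', since zip
// entry names are expected to use forward slashes on every platform
let name = path.strip_prefix(prefix).unwrap();
let entry_name = name
    .components()
    .map(|c| c.as_os_str().to_string_lossy())
    .collect::<Vec<_>>()
    .join("/");
zip.start_file(entry_name, options).unwrap();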

Related

Swift SQL POST method - Why does it always create a response with <null> as the string?

I'm trying to set up POST requests to connect my Swift Project to my SQL server. I found a lot of good resources on how to create the POST requests, and my code seems to create the request fine. However, it always returns a null string instead of the desired output string. How can I get the request to submit with the correct string?
My POST function:
func addString() {
    // declare the parameters as a dictionary that contains String as key and value combination, considering inputs are valid
    let parameters: [String: Any] = ["textstring": "Testing..."]
    // create the url with URL
    let url = URL(string: "http://localhost:3000/strings")! // change server url accordingly
    // create the session object
    let session = URLSession.shared
    // now create the URLRequest object using the url object
    var request = URLRequest(url: url)
    request.httpMethod = "POST" // set http method as POST
    // add headers for the request
    request.addValue("application/json", forHTTPHeaderField: "Content-Type") // change as per server requirements
    do {
        // convert parameters to Data and assign dictionary to httpBody of request
        request.httpBody = try JSONSerialization.data(withJSONObject: parameters, options: .prettyPrinted)
    } catch let error {
        print(error.localizedDescription)
        return
    }
    // create dataTask using the session object to send data to the server
    let task = session.dataTask(with: request) { data, response, error in
        if let error = error {
            print("Post Request Error: \(error.localizedDescription)")
            return
        }
        // ensure there is a valid response code returned from this HTTP response
        guard let httpResponse = response as? HTTPURLResponse,
              (200...299).contains(httpResponse.statusCode)
        else {
            print("Invalid Response received from the server")
            return
        }
        // ensure there is data returned
        guard let responseData = data else {
            print("nil Data received from the server")
            return
        }
        do {
            // create json object from data or use JSONDecoder to convert to Model struct
            if let jsonResponse = try JSONSerialization.jsonObject(with: responseData, options: .mutableContainers) as? [String: Any] {
                print(jsonResponse)
                // handle json response
            } else {
                print("data may be corrupted or in wrong format")
                throw URLError(.badServerResponse)
            }
        } catch let error {
            print(error.localizedDescription)
        }
    }
    // perform the task
    task.resume()
}
My object TextString that emulates the SQL table (table "strings" has two columns: "id" (SERIAL PRIMARY KEY) and "textstring" (VARCHAR(255))):
struct TextString: Codable, Hashable, Identifiable {
    var id: Int
    var textString: String

    enum CodingKeys: String, CodingKey {
        case id = "string_id"
        case textString = "textstring"
    }
}
My response output:
["string_id": 28, "textstring": <null>]
I'm thinking that the issue is with the serialization, but the code is taken almost directly from a YouTube video where the test cases are shown working. Does anyone know why the output returns "textstring" as null and not "Testing..."?
Thanks in advance!

Redirect stdio over TLS in Rust

I am trying to replicate the "-e" option in ncat to redirect stdio in Rust to a remote ncat listener.
I can do it over TcpStream by using dup2 and then executing the "/bin/sh" command in Rust. However, I do not know how to do it over TLS, as redirection seems to require file descriptors, which TlsStream does not seem to provide.
Can anyone advise on this?
EDIT 2 Nov 2020
Someone in the Rust forum has kindly shared a solution with me (https://users.rust-lang.org/t/redirect-stdio-pipes-and-file-descriptors/50751/8) and now I am trying to work out how to redirect the stdio over the TLS connection.
let mut command_output = std::process::Command::new("/bin/sh")
    .stdin(Stdio::piped())
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .spawn()
    .expect("cannot execute command");
let mut command_stdin = command_output.stdin.unwrap();
println!("command_stdin {}", command_stdin.as_raw_fd());
let copy_stdin_thread = std::thread::spawn(move || {
    io::copy(&mut io::stdin(), &mut command_stdin)
});
let mut command_stdout = command_output.stdout.unwrap();
println!("command_stdout {}", command_stdout.as_raw_fd());
let copy_stdout_thread = std::thread::spawn(move || {
    io::copy(&mut command_stdout, &mut io::stdout())
});
let mut command_stderr = command_output.stderr.unwrap();
println!("command_stderr {}", command_stderr.as_raw_fd());
let copy_stderr_thread = std::thread::spawn(move || {
    io::copy(&mut command_stderr, &mut io::stderr())
});
copy_stdin_thread.join().unwrap()?;
copy_stdout_thread.join().unwrap()?;
copy_stderr_thread.join().unwrap()?;
This question and this answer are not specific to Rust.
You noticed the important fact that the I/O of the redirected process must be file descriptors.
One possible solution in your application is the following (a code sketch follows the list):
- use socketpair(PF_LOCAL, SOCK_STREAM, 0, fd); this provides two connected bidirectional file descriptors,
- use dup2() on one end of this socketpair for the I/O of the redirected process (as you would do with an unencrypted TCP stream),
- watch both the other end and the TLS stream (in a select()-like manner, for example) in order to:
  - receive what becomes available from the socketpair and send it to the TLS stream,
  - receive what becomes available from the TLS stream and send it to the socketpair.
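A rough sketch of the socketpair()/dup2() part, using the libc crate; this is only an outline of the steps above, with error handling, the fork/exec of the shell, and the relay loop omitted:
// sketch only: create two connected bidirectional file descriptors
let mut fds = [0 as libc::c_int; 2];
let rc = unsafe { libc::socketpair(libc::PF_LOCAL, libc::SOCK_STREAM, 0, fds.as_mut_ptr()) };
assert_eq!(rc, 0, "socketpair() failed");
let (child_end, relay_end) = (fds[0], fds[1]);
// give one end to the redirected process, as with an unencrypted TcpStream
unsafe {
    libc::dup2(child_end, 0); // becomes its stdin
    libc::dup2(child_end, 1); // becomes its stdout
    libc::dup2(child_end, 2); // becomes its stderr
}
// then exec "/bin/sh", and in a select()-like loop forward bytes
// between `relay_end` and the TLS stream, in both directions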
Note that select() on a TLS stream (its underlying file descriptor, actually) is a bit tricky because some bytes may already have been received (on its underlying file descriptor) and decrypted in the internal buffer while not yet consumed by the application.
You have to ask the TLS stream whether its reception buffer is empty before trying a new select() on it.
Using an asynchronous or threaded solution for this watch/recv/send loop is probably easier than relying on a select()-like solution.
Edit, after the edit in the question:
Since you now have a solution relying on three distinct pipes, you can forget everything about socketpair().
The invocation of std::io::copy() in each thread of your example is a simple loop that receives some bytes from its first parameter and sends them to the second.
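Schematically, each of those copies boils down to something like this (a simplified sketch; the real std::io::copy is more careful about interrupted reads and buffer sizes):
use std::io::{Read, Write};

// simplified sketch of what std::io::copy() does internally
fn copy_sketch<R: Read, W: Write>(reader: &mut R, writer: &mut W) -> std::io::Result<u64> {
    let mut buf = [0u8; 8192];
    let mut total: u64 = 0;
    loop {
        let n = reader.read(&mut buf)?; // receive some bytes from the reader
        if n == 0 {
            return Ok(total); // EOF on the reader side
        }
        writer.write_all(&buf[..n])?; // send them to the writer
        total += n as u64;
    }
}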
Your TlsStream is probably a single structure performing all the encrypted I/O operations (sending as well as receiving), so you will not be able to provide a &mut reference to it from multiple threads.
The best is probably to write your own loop trying to detect new incoming bytes and then dispatch them to the appropriate destination.
As explained above, I would use select() for that.
Unfortunately in Rust, as far as I know, we have to rely on low-level crates such as libc for that (there may be other high-level solutions I am not aware of in the async world...).
I produced a (not so) minimal example below in order to show the main idea; it is certainly far from being perfect, so « handle with care » ;^)
(it relies on native-tls and libc)
Accessing it from openssl gives this
$ openssl s_client -connect localhost:9876
CONNECTED(00000003)
Can't use SSL_get_servername
...
Extended master secret: yes
---
hello
/bin/sh: line 1: hello: command not found
df
Filesystem 1K-blocks Used Available Use% Mounted on
dev 4028936 0 4028936 0% /dev
run 4038472 1168 4037304 1% /run
/dev/sda5 30832548 22074768 7168532 76% /
tmpfs 4038472 234916 3803556 6% /dev/shm
tmpfs 4096 0 4096 0% /sys/fs/cgroup
tmpfs 4038472 4 4038468 1% /tmp
/dev/sda6 338368556 219588980 101568392 69% /home
tmpfs 807692 56 807636 1% /run/user/9223
exit
read:errno=0
fn main() {
    let args: Vec<_> = std::env::args().collect();
    let use_simple = args.len() == 2 && args[1] == "s";
    let mut file = std::fs::File::open("server.pfx").unwrap();
    let mut identity = vec![];
    use std::io::Read;
    file.read_to_end(&mut identity).unwrap();
    let identity =
        native_tls::Identity::from_pkcs12(&identity, "dummy").unwrap();
    let listener = std::net::TcpListener::bind("0.0.0.0:9876").unwrap();
    let acceptor = native_tls::TlsAcceptor::new(identity).unwrap();
    let acceptor = std::sync::Arc::new(acceptor);
    for stream in listener.incoming() {
        match stream {
            Ok(stream) => {
                let acceptor = acceptor.clone();
                std::thread::spawn(move || {
                    let stream = acceptor.accept(stream).unwrap();
                    if use_simple {
                        simple_client(stream);
                    } else {
                        redirect_shell(stream);
                    }
                });
            }
            Err(_) => {
                println!("accept failure");
                break;
            }
        }
    }
}
fn simple_client(mut stream: native_tls::TlsStream<std::net::TcpStream>) {
    let mut buffer = [0_u8; 100];
    let mut count = 0;
    loop {
        use std::io::Read;
        if let Ok(sz_r) = stream.read(&mut buffer) {
            if sz_r == 0 {
                println!("EOF");
                break;
            }
            println!(
                "received <{}>",
                std::str::from_utf8(&buffer[0..sz_r]).unwrap_or("???")
            );
            let reply = format!("message {} is {} bytes long\n", count, sz_r);
            count += 1;
            use std::io::Write;
            if stream.write_all(reply.as_bytes()).is_err() {
                println!("write failure");
                break;
            }
        } else {
            println!("read failure");
            break;
        }
    }
}
fn redirect_shell(mut stream: native_tls::TlsStream<std::net::TcpStream>) {
    // start child process
    let mut child = std::process::Command::new("/bin/sh")
        .stdin(std::process::Stdio::piped())
        .stdout(std::process::Stdio::piped())
        .stderr(std::process::Stdio::piped())
        .spawn()
        .expect("cannot execute command");
    // access useful I/O and file descriptors
    let stdin = child.stdin.as_mut().unwrap();
    let stdout = child.stdout.as_mut().unwrap();
    let stderr = child.stderr.as_mut().unwrap();
    use std::os::unix::io::AsRawFd;
    let stream_fd = stream.get_ref().as_raw_fd();
    let stdout_fd = stdout.as_raw_fd();
    let stderr_fd = stderr.as_raw_fd();
    // main send/recv loop
    use std::io::{Read, Write};
    let mut buffer = [0_u8; 100];
    loop {
        // no need to wait for new incoming bytes on the tcp-stream
        // if some are already decoded in the tls-stream
        let already_buffered = match stream.buffered_read_size() {
            Ok(sz) if sz > 0 => true,
            _ => false,
        };
        // prepare file descriptors to be watched for by select()
        // (zeroed() gives a valid all-clear fd_set, unlike uninitialised memory)
        let mut fdset: libc::fd_set = unsafe { std::mem::zeroed() };
        let mut max_fd = -1;
        unsafe { libc::FD_ZERO(&mut fdset) };
        unsafe { libc::FD_SET(stdout_fd, &mut fdset) };
        max_fd = std::cmp::max(max_fd, stdout_fd);
        unsafe { libc::FD_SET(stderr_fd, &mut fdset) };
        max_fd = std::cmp::max(max_fd, stderr_fd);
        if !already_buffered {
            // see above
            unsafe { libc::FD_SET(stream_fd, &mut fdset) };
            max_fd = std::cmp::max(max_fd, stream_fd);
        }
        // block this thread until something new happens
        // on these file-descriptors (don't wait if some bytes
        // are already decoded in the tls-stream)
        let mut zero_timeout: libc::timeval = unsafe { std::mem::zeroed() };
        unsafe {
            libc::select(
                max_fd + 1,
                &mut fdset,
                std::ptr::null_mut(),
                std::ptr::null_mut(),
                if already_buffered {
                    &mut zero_timeout
                } else {
                    std::ptr::null_mut()
                },
            )
        };
        // this thread is not blocked any more,
        // try to handle what happened on the file descriptors
        if unsafe { libc::FD_ISSET(stdout_fd, &mut fdset) } {
            // something new happened on stdout,
            // try to receive some bytes and send them through the tls-stream
            if let Ok(sz_r) = stdout.read(&mut buffer) {
                if sz_r == 0 {
                    println!("EOF detected on stdout");
                    break;
                }
                if stream.write_all(&buffer[0..sz_r]).is_err() {
                    println!("write failure on tls-stream");
                    break;
                }
            } else {
                println!("read failure on process stdout");
                break;
            }
        }
        if unsafe { libc::FD_ISSET(stderr_fd, &mut fdset) } {
            // something new happened on stderr,
            // try to receive some bytes and send them through the tls-stream
            if let Ok(sz_r) = stderr.read(&mut buffer) {
                if sz_r == 0 {
                    println!("EOF detected on stderr");
                    break;
                }
                if stream.write_all(&buffer[0..sz_r]).is_err() {
                    println!("write failure on tls-stream");
                    break;
                }
            } else {
                println!("read failure on process stderr");
                break;
            }
        }
        if already_buffered
            || unsafe { libc::FD_ISSET(stream_fd, &mut fdset) }
        {
            // something new happened on the tls-stream
            // (or some bytes were already buffered),
            // try to receive some bytes and send them on stdin
            if let Ok(sz_r) = stream.read(&mut buffer) {
                if sz_r == 0 {
                    println!("EOF detected on tls-stream");
                    break;
                }
                if stdin.write_all(&buffer[0..sz_r]).is_err() {
                    println!("write failure on stdin");
                    break;
                }
            } else {
                println!("read failure on tls-stream");
                break;
            }
        }
    }
    let _ = child.wait();
}

HCL Domino AppDevPack - Problem with writing Rich Text

I use the code proposed as an example in the documentation for Domino AppDev Pack 1.0.4; the only difference is that I read a text file (body.txt) into a buffer, the file containing only plain, long text (40 KB).
When it is executed, the document is created in the database and the rest of the code does not return an error.
But in the end, the rich text field is not added to the document.
Here is the response returned:
response: {"fields":[{"fieldName":"Body","unid":"8EA69129BEECA6DEC1258554002F5DCD","error":{"name":"ProtonError","code":65577,"id":"RICH_TEXT_STREAM_CORRUPT"}}]}
My goal is to write very long text (more than 64 KB) into a rich text field. In this example I use a text file for the buffer, but later it could be something like const buffer = Buffer.from('very long text ...').
Is this the right way, or does it have to be done differently?
I'm using a Windows system with IBM Domino (r) Server (64 Bit), Release 10.0.1FP4 and AppDevPack 1.0.4.
Thank you in advance for your help.
Here's the code:
const write = async (database) => {
  let writable;
  let result;
  try {
    // Create a document with subject write-example-1 to hold rich text
    const unid = await database.createDocument({
      document: {
        Form: 'RichDiscussion',
        Title: 'write-example-1',
      },
    });
    writable = await database.bulkCreateRichTextStream({});
    result = await new Promise((resolve, reject) => {
      // Set up event handlers.
      // Reject the Promise if there is a connection-level error.
      writable.on('error', (e) => {
        reject(e);
      });
      // Return the response from writing when resolving the Promise.
      writable.on('response', (response) => {
        console.log("response: " + JSON.stringify(response));
        resolve(response);
      });
      // Indicates which document and item name to use.
      writable.field({ unid, fieldName: 'Body' });
      let offset = 0;
      // Assume for purposes of this example that we buffer the entire file.
      const buffer = fs.readFileSync('/driver/body.txt');
      // When writing large amounts of data, it is necessary to
      // wait for the client-side to complete the previous write
      // before writing more data.
      const writeData = () => {
        let draining = true;
        while (offset < buffer.length && draining) {
          const remainingBytes = buffer.length - offset;
          let chunkSize = 16 * 1024;
          if (remainingBytes < chunkSize) {
            chunkSize = remainingBytes;
          }
          draining = writable.write(buffer.slice(offset, offset + chunkSize));
          offset += chunkSize;
        }
        if (offset < buffer.length) {
          // Buffer is not draining. Whenever the drain event is emitted,
          // call this function again to write more data.
          writable.once('drain', writeData);
        }
      };
      writeData();
      writable = undefined;
    });
  } catch (e) {
    console.log(`Unexpected exception ${e.message}`);
  } finally {
    if (writable) {
      writable.end();
    }
  }
  return result;
};
As of AppDev Pack 1.0.4, the rich text stream only accepts data in the valid rich text CD format, in the LMBCS character set. We are currently working on a library to help you write valid rich text data to the stream.
I'd love to hear more about your use cases, and we're excited you're already poking around the feature! If you can join the OpenNTF Slack channel, I usually hang out there.

Upload Multipart files Completion block

I'm using the Alamofire 5 beta and I can't find the encodingResult that was used in previous versions.
This is my function:
static func postComplexPictures(complexId: String, pictures: [UIImage], completion: @escaping (DataResponse<Data?>) -> Void) {
    let url = K.ProductionServer.baseURL + "/api/v1/complex/" + complexId + "/pictures"
    let token: String = UserDefaults.standard.string(forKey: "Token") ?? ""
    let bearerToken: String = "Bearer " + token
    let bundleId: String = Bundle.footballNow.bundleIdentifier!
    let headers: HTTPHeaders = [HTTPHeaderField.authentication.rawValue: bearerToken,
                                HTTPHeaderField.contentType.rawValue: ContentType.multipart.rawValue,
                                HTTPHeaderField.bundleIdentifier.rawValue: bundleId]
    AF.upload(multipartFormData: { (multipartFormData) in
        for (index, image) in pictures.enumerated() { // enumerated() supplies the `index` used below
            if let imageData = UIImageJPEGRepresentation(image, 0.5) {
                multipartFormData.append(imageData, withName: "pictures[\(index)]", fileName: "picture", mimeType: "image/jpeg")
            }
        }
    }, usingThreshold: UInt64.init(), to: url, method: .post, headers: headers).response(completionHandler: completion)
}
The .response actually calls my block, but it returns too quickly for the images to have been uploaded, and I don't have a reference to the upload status of the images.
Any thoughts?
Thanks!
I'm happy to say that there is no encoding result in Alamofire 5! Instead, failures in multipart encoding, and the async work required to encode it, are now part of the same request path as everything else, so you'll get any errors in your response calls, just like any other request. If your request is finishing quickly, check the error, as the multipart encoding may have failed.

What is the best variant for appending a new line in a text file?

I am using this code to append a new line to the end of a file:
let text = "New line".to_string();
let mut option = OpenOptions::new();
option.read(true);
option.write(true);
option.create(true);
match option.open("foo.txt") {
    Err(e) => {
        println!("Error");
    }
    Ok(mut f) => {
        println!("File opened");
        let size = f.seek(SeekFrom::End(0)).unwrap();
        let n_text = match size {
            0 => text.clone(),
            _ => format!("\n{}", text),
        };
        match f.write_all(n_text.as_bytes()) {
            Err(e) => {
                println!("Write error");
            }
            Ok(_) => {
                println!("Write success");
            }
        }
        f.sync_all();
    }
}
It works, but I think it's too convoluted. I found option.append(true);, but if I use it instead of option.write(true); I get "Write error".
Using OpenOptions::append is the clearest way to append to a file:
use std::fs::OpenOptions;
use std::io::prelude::*;

fn main() {
    let mut file = OpenOptions::new()
        .write(true)
        .append(true)
        .open("my-file")
        .unwrap();

    if let Err(e) = writeln!(file, "A new line!") {
        eprintln!("Couldn't write to file: {}", e);
    }
}
As of Rust 1.8.0 (commit) and RFC 1252, append(true) implies write(true). This should not be a problem anymore.
Before Rust 1.8.0, you must use both write and append — the first one allows you to write bytes into a file, the second specifies where the bytes will be written.
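So on a modern toolchain the question's append can be reduced to this minimal sketch (create(true) is included so the file is created when missing, matching the original options):
use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // append(true) implies write(true) since Rust 1.8.0
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("foo.txt")?;
    writeln!(file, "New line")
}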