Error handling after doing strtol

I am trying to read several numbers on stdin, one number per line. I want to ignore any trailing text after the number and skip lines that contain only text. To implement this I used the below code:
while (getline(cin, str)) {
    num = strtol(str.c_str(), NULL, 0);
    if (errno != ERANGE && errno != EINVAL) {
        arr[i++] = num;
        req_pages_size++;
        cout << arr[i-1] << "\t";
    }
    str.clear();
}
ISSUE: After an unsuccessful conversion sets errno, a later successful conversion does not reset it; errno keeps the error value from the previous failed call.
How should I handle this?

The manpage of errno states:
errno is never set to zero by any system call or library function
but you can set it to zero yourself, as stated in the manpage of strtol():
the calling program should set errno to 0 before the call, and then determine if an error occurred by checking whether errno has a nonzero value after the call.
so just add
errno = 0;
before calling strtol()
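For reference, here is a minimal sketch of the question's loop with that fix applied (the end-pointer check and the bounds check are extra guards added for this sketch, not part of the original code):
#include <cerrno>
#include <cstdlib>
#include <iostream>
#include <string>

int main() {
    std::string str;
    long arr[1024];                                  // sized arbitrarily for this sketch
    int i = 0;
    int req_pages_size = 0;

    while (std::getline(std::cin, str)) {
        errno = 0;                                   // clear any error left over from a previous call
        char *end = nullptr;
        long num = std::strtol(str.c_str(), &end, 0);
        if (errno == 0 && end != str.c_str() && i < 1024) {  // conversion succeeded and consumed digits
            arr[i++] = num;
            req_pages_size++;
            std::cout << arr[i - 1] << "\t";
        }
    }
    return 0;
}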


Can you retry a Zig function call when it returns an error?

Zig's documentation shows different methods of error handling including bubbling the error value up the call stack, catching the error and using a default value, panicking, etc.
I'm trying to figure out how to retry functions which provide error values.
For example, in the below snippet from ziglearn, is there a way to retry the nextLine function in the event that a user enters more than 100 characters?
const std = @import("std");

fn nextLine(reader: anytype, buffer: []u8) !?[]const u8 {
    var line = (try reader.readUntilDelimiterOrEof(
        buffer,
        '\n',
    )) orelse return null;
    // trim annoying windows-only carriage return character
    if (@import("builtin").os.tag == .windows) {
        return std.mem.trimRight(u8, line, "\r");
    } else {
        return line;
    }
}
test "read until next line" {
const stdout = std.io.getStdOut();
const stdin = std.io.getStdIn();
try stdout.writeAll(
\\ Enter your name:
);
var buffer: [100]u8 = undefined;
const input = (try nextLine(stdin.reader(), &buffer)).?;
try stdout.writer().print(
"Your name is: \"{s}\"\n",
.{input},
);
}
This should do what you want.
const input = while (true) {
    const x = nextLine(stdin.reader(), &buffer) catch continue;
    break x;
} else unreachable; // (see comment) fallback value could be an empty string maybe?
To break it down:
Instead of try, you can use catch to do something in case of an error; here we restart the loop when nextLine fails.
While loops can also be used as expressions, and you can break out of them with a value. They also need an else branch, in case the loop ends without breaking out of it. In our case this is impossible, since we loop forever until nextLine succeeds, but if we had another exit condition (like a limit on the number of retries), then we would need to provide a "fallback" value instead of unreachable.
You can also make it a one-liner:
const input = while (true) break nextLine(stdin.reader(), &buffer) catch continue else unreachable;
Hopefully the new self-hosted compiler will be able to pick up on the fact that the else branch is not necessary, since we're going to either break with a value or loop forever.

Why does perror() change the orientation of a stream when it is redirected?

The standard says that:
The perror() function shall not change the orientation of the standard error stream.
This is the implementation of perror() in GNU libc.
Following are the tests when stderr is wide-oriented, multibyte-oriented and not oriented, prior to calling perror().
Tests 1) and 2) are OK. The issue is in test 3).
1) stderr is wide-oriented:
#include <stdio.h>
#include <wchar.h>
#include <errno.h>

int main(void)
{
    fwide(stderr, 1);
    errno = EINVAL;
    perror("");
    int x = fwide(stderr, 0);
    printf("fwide: %d\n",x);
    return 0;
}
$ ./a.out
Invalid argument
fwide: 1
$ ./a.out 2>/dev/null
fwide: 1
2) stderr is multibyte-oriented:
#include <stdio.h>
#include <wchar.h>
#include <errno.h>

int main(void)
{
    fwide(stderr, -1);
    errno = EINVAL;
    perror("");
    int x = fwide(stderr, 0);
    printf("fwide: %d\n",x);
    return 0;
}
$ ./a.out
Invalid argument
fwide: -1
$ ./a.out 2>/dev/null
fwide: -1
3) stderr is not oriented:
#include <stdio.h>
#include <wchar.h>
#include <errno.h>

int main(void)
{
    printf("initial fwide: %d\n", fwide(stderr, 0));
    errno = EINVAL;
    perror("");
    int x = fwide(stderr, 0);
    printf("fwide: %d\n", x);
    return 0;
}
$ ./a.out
initial fwide: 0
Invalid argument
fwide: 0
$ ./a.out 2>/dev/null
initial fwide: 0
fwide: -1
Why does perror() change the orientation of the stream if it is redirected? Is this proper behavior?
How does this code work? What is this __dup trick all about?
TL;DR: Yes, it's a bug in glibc. If you care about it, you should report it.
The quoted requirement that perror not change the stream orientation is in Posix, but does not seem to be required by the C standard itself. However, Posix seems quite insistent that the orientation of stderr not be changed by perror, even if stderr is not yet oriented. XSH 2.5 Standard I/O Streams:
The perror(), psiginfo(), and psignal() functions shall behave as described above for the byte output functions if the stream is already byte-oriented, and shall behave as described above for the wide-character output functions if the stream is already wide-oriented. If the stream has no orientation, they shall behave as described for the byte output functions except that they shall not change the orientation of the stream.
And glibc attempts to implement Posix semantics. Unfortunately, it doesn't quite get it right.
Of course, it is impossible to write to a stream without setting its orientation. So in an attempt to comply with this curious requirement, glibc attempts to make a new stream based on the same fd as stderr, using the code pointed to at the end of the OP:
if (__builtin_expect (_IO_fwide (stderr, 0) != 0, 1)
    || (fd = __fileno (stderr)) == -1
    || (fd = __dup (fd)) == -1
    || (fp = fdopen (fd, "w+")) == NULL)
  { ...
which, stripping out the internal symbols, is essentially equivalent to:
if (fwide (stderr, 0) != 0
    || (fd = fileno (stderr)) == -1
    || (fd = dup (fd)) == -1
    || (fp = fdopen (fd, "w+")) == NULL)
  {
    /* Either stderr has an orientation or the duplication failed,
     * so just write to stderr
     */
    if (fd != -1) close(fd);
    perror_internal(stderr, s, errnum);
  }
else
  {
    /* Write the message to fp instead of stderr */
    perror_internal(fp, s, errnum);
    fclose(fp);
  }
fileno extracts the fd from a standard C library stream. dup takes an fd, duplicates it, and returns the number of the copy. And fdopen creates a standard C library stream from an fd. In short, that doesn't reopen stderr; rather, it creates (or attempts to create) a copy of stderr which can be written to without affecting the orientation of stderr.
Unfortunately, it doesn't work reliably because of the mode:
fp = fdopen(fd, "w+");
That attempts to open a stream which allows both reading and writing. And it will work with the original stderr, which is just a copy of the console fd, originally opened for both reading and writing. But when you bind stderr to some other device with a redirect:
$ ./a.out 2>/dev/null
you are passing the executable an fd opened only for output. And fdopen won't let you get away with that:
The application shall ensure that the mode of the stream as expressed by the mode argument is allowed by the file access mode of the open file description to which fildes refers.
The glibc implementation of fdopen actually checks, and returns NULL with errno set to EINVAL if you specify a mode which requires access rights not available to the fd.
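To see that check in isolation, here is a small standalone sketch of my own (not part of the answer's glibc excerpt): it opens /dev/null write-only, which is what a 2>/dev/null redirect hands the program, and then asks fdopen for a read/write stream, which glibc rejects with EINVAL:
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int fd = open("/dev/null", O_WRONLY);   /* write-only, like an fd from 2>/dev/null */
    if (fd == -1) {
        perror("open");
        return 1;
    }
    FILE *fp = fdopen(fd, "w+");            /* requests read+write access */
    if (fp == NULL)
        printf("fdopen(\"w+\") failed: %s\n", strerror(errno));   /* EINVAL on glibc */
    else
        fclose(fp);
    return 0;
}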
So you could get your test to pass if you redirect stderr for both reading and writing:
$ ./a.out 2<>/dev/null
But what you probably wanted in the first place was to redirect stderr in append mode:
$ ./a.out 2>>/dev/null
and as far as I know, bash does not provide a way to read/append redirect.
I don't know why the glibc code uses "w+" as a mode argument, since it has no intention of reading from stderr. "w" should work fine, although it probably won't preserve append mode, which might have unfortunate consequences.
I'm not sure if there's a good answer to "why" without asking the glibc developers - it may just be a bug - but the POSIX requirement seems to conflict with ISO C, which reads in 7.21.2, ¶4:
Each stream has an orientation. After a stream is associated with an external file, but before any operations are performed on it, the stream is without orientation. Once a wide character input/output function has been applied to a stream without orientation, the stream becomes a wide-oriented stream. Similarly, once a byte input/output function has been applied to a stream without orientation, the stream becomes a byte-oriented stream. Only a call to the freopen function or the fwide function can otherwise alter the orientation of a stream. (A successful call to freopen removes any orientation.)
Further, perror seems to qualify as a "byte I/O function" since it takes a char * and, per 7.21.10.4 ¶2, "writes a sequence of characters".
Since POSIX defers to ISO C in the event of a conflict, there is an argument to be made that the POSIX requirement here is void.
As for the actual examples in the question:
1) Undefined behavior: a byte I/O function is called on a wide-oriented stream.
2) Nothing at all controversial; the orientation was correct for calling perror and did not change as a result of the call.
3) Calling perror oriented the stream to byte orientation. This seems to be required by ISO C but disallowed by POSIX.

tftpGet error from tftpLib in VxWorks

I'm writing a little function that downloads a file from a TFTP server using VxWorks' tftpLib (http://www.vxdev.com/docs/vx55man/vxworks/ref/tftpLib.html). Now I realized that my tftpGet() call is returning an error of 1, but I'm not sure what error code 1 means. The posted website says:
ERRNO
S_tftpLib_INVALID_DESCRIPTOR
S_tftpLib_INVALID_ARGUMENT
S_tftpLib_NOT_CONNECTED
But how do I know what 1 corresponds with?
The get portion of my code looks like this:
/* Initialize and create local file handle */
pFile = fopen("ngfm.bin","wb");
if (pFile != NULL)
{
    /* Get file from TFTP server and write it to the file descriptor */
    status = tftpGet (pTftpDesc, pFilename, pFile, TFTP_CLIENT);
    printf("GOT %s\n",pFilename);
}
else
{
    printf("Error in tftpGet()\nfailed to get %s from %s\nERRNO %d",pFilename,pHost, status);
}
Try this code:
int status;
if (OK == (status = tftpGet (pTftpDesc, pFilename, fd, TFTP_CLIENT))) {
    printf("tftpGet() successful\n");
} else {
    printf("Error has occurred: %d\n", errno); // errno is where the error is stored
}
No, the problem in fact was that I didn't get a valid file pointer but NULL: there is no such thing as a "current directory" in VxWorks as there is in Linux, so I had to change my fopen call to something like pFile = fopen("flash:/ngfm.bin","wb"); instead.
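So the shape of the fix is simply to open the local file with a full device path and check the FILE pointer before calling tftpGet; a hedged sketch reusing pFile from the question (the flash: device name comes from my comment above and may differ on other targets):
/* Open the local file with an absolute device path; VxWorks has no notion
 * of a current working directory, so a bare filename fails here. */
pFile = fopen("flash:/ngfm.bin", "wb");
if (pFile == NULL)
{
    printf("fopen failed: %s\n", strerror(errno));   /* report why before touching tftpGet */
}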

IOCTL call and checking return value

if((err = ioctl(fd, IOC_CARD_LOCK, &lock)) < 0)
{
    printf("ioctl failed and returned errno %d \n",err);
}
Is the above code correct and a good programming practice? It does compile on my PC.
I.e., does it populate err with the return value of ioctl and check whether err is < 0?
Is the above a standard way to handle the error returned by ioctl?
There also seems to be a standard variable called errno.
What is it? Will it be the same as the value above?
I found out a better way to do this.
if(ioctl(fd, IOC_CARD_LOCK, &lock) < 0)
{
    printf("ioctl failed and returned errno %s \n",strerror(errno));
}
errno is a global variable that is set by system calls, and strerror converts the error code into a meaningful string ("Invalid argument", for example).
Just stumbled over this response. The answer is only partly correct, as printf might overwrite errno - according to the man pages it's mandatory to save errno. So a more robust answer is:
if(ioctl(fd, IOC_CARD_LOCK, &lock) < 0)
{
    int errsv = errno;
    printf("ioctl failed and returned errno %s \n",strerror(errsv));
}
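An equivalent, slightly shorter pattern is to call the standard perror() immediately after the failing ioctl, before anything else can overwrite errno:
if(ioctl(fd, IOC_CARD_LOCK, &lock) < 0)
{
    perror("ioctl IOC_CARD_LOCK");   /* prints the given prefix plus strerror(errno) */
}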

When will fcntl in Solaris return a value less than -1 for F_SETLKW

From the manual of fcntl in Solaris: upon successful completion, the value returned for F_SETLKW will be "a value other than -1".
But the Apache httpd 1.3.41 source code (http_main.c) checks whether the returned value is negative, like this:
int ret;
while ((ret = fcntl(lock_fd, F_SETLKW, &unlock_it)) < 0 && errno == EINTR) {
    /* nop */
}
if (ret < 0) {
    ap_log_error(APLOG_MARK, APLOG_EMERG, server_conf,
        "fcntl: F_SETLKW: Error getting accept lock, exiting! "
        "Perhaps you need to use the LockFile directive to place "
        "your lock file on a local disk!");
    clean_child_exit(APEXIT_CHILDFATAL);
}
In very rare cases, Apache on one of our systems exits because of this failed test. I suspect this is caused by fcntl returning a negative value less than -1.
So when will fcntl in Solaris return a value less than -1?
In your code sample, fcntl returning < 0 (i.e., -1) only indicates an error if errno is something other than EINTR; if errno == EINTR (interrupted), it is not a real error, it just means the call should be retried.
The Solaris manual's "Upon successful completion, the value returned for F_SETLKW will be a value other than -1" means that on success fcntl returns 0 or a positive value; "a value other than -1" means >= 0 here, not < -1 as you guessed.