Log all messages sent to a stripped class with DTrace - objective-c

It's normally easy to log all messages sent to a class with DTrace, for example with the trace_msg_send.d script from here:
#!/usr/sbin/dtrace -s
#pragma D option quiet

unsigned long long indention;
int indentation_amount;

BEGIN {
    indentation_amount = 4;
}

objc$target:$1::entry
{
    method = (string)&probefunc[1];
    type = probefunc[0];
    class = probemod;
    printf("%*s%s %c[%s %s]\n", indention * indentation_amount, "", "->", type, class, method);
    indention++;
}

objc$target:$1::return
{
    indention--;
    method = (string)&probefunc[1];
    type = probefunc[0];
    class = probemod;
    printf("%*s%s %c[%s %s]\n", indention * indentation_amount, "", "<-", type, class, method);
}
Given the code:
$ cat > test.m
#include <stdio.h>
#import <Foundation/Foundation.h>

@interface TestClass : NSObject
+ (int)randomNum;
@end

@implementation TestClass
+ (int)randomNum {
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}
@end

int main(void) {
    printf("num: %d\n", [TestClass randomNum]);
    return 0;
}
^D
you would get the following output by using the DTrace script:
$ gcc test.m -lobjc -o test
$ ./test
num: 4
$ sudo ./trace_msg_send.d -c ./test TestClass
num: 4
-> +[TestClass randomNum]
<- +[TestClass randomNum]
However if we strip the binary:
$ strip test
$ sudo ./trace_msg_send.d -c ./test TestClass
dtrace: failed to compile script ./trace_msg_send.d: line 11: probe description objc49043:TestClass::entry does not match any probes
the script no longer works.
Is there a way of telling DTrace where the stripped classes/message implementations are located?
Maybe using the info available from class-dump, like in the answer to "Import class-dump info into GDB".
I have created a script that tries to compare the class name at runtime; however, it doesn't seem to work:
#!/usr/sbin/dtrace -s
#pragma D option quiet

unsigned long long indention;
int indentation_amount;

BEGIN {
    indentation_amount = 4;
}

objc$target:::entry
{
    // Assumes a 64-bit executable
    isaptr = *(uint64_t *)copyin(arg0, 8);
    class_rw = *(uint64_t *)copyin(isaptr + 4*8, 8);
    class_ro = *(uint64_t *)copyin(class_rw + 8, 8);
    classnameptr = *(uint64_t *)copyin(class_ro + 4*4 + 8, 8);
    classname = copyinstr(classnameptr);
    type = classname == $$1 ? (string)&probefunc[0] : "";
    method = classname == $$1 ? (string)&probefunc[1] : "";
    arrow = classname == $$1 ? "->" : "";
    space = classname == $$1 ? " " : "";
    bracket1 = classname == $$1 ? "[" : "";
    bracket2 = classname == $$1 ? "]" : "";
    current_indention = classname == $$1 ? indention : 0;
    indention = classname == $$1 ? indention + 1 : indention;
    newline = classname == $$1 ? "\n" : "";
    classname = classname == $$1 ? classname : "";
    printf("%*s%s%s%.1s%s%s%s%s%s%s", current_indention * indentation_amount, "", arrow, space, type, bracket1, classname, space, method, bracket2, newline);
}
Running sudo ./trace_msg_send2.d -c ./test TestClass prints:
num: 4
dtrace: error on enabled probe ID 179 (ID 13922: objc3517:NSObject:+load:entry): invalid address (0x7fff8204e03e) in action #5 at DIF offset 12
dtrace: error on enabled probe ID 186 (ID 13915: objc3517:__IncompleteProtocol:+load:entry): invalid address (0x7fff8204e01f) in action #5 at DIF offset 12
dtrace: error on enabled probe ID 195 (ID 13906: objc3517:Protocol:+load:entry): invalid address (0x7fff8204e034) in action #5 at DIF offset 12
-> +[TestClass initialize]
where -> +[TestClass randomNum] is missing.

Related

How can I wait for a Rust `Child` process whose stdout has been piped to another?

I want to implement yes | head -n 1 in Rust, properly
connecting pipes and checking exit statuses: i.e., I want to be able to
determine that yes exits due to SIGPIPE and that head completes
normally. The pipe functionality is easy (Rust Playground):
use std::process;

fn a() -> std::io::Result<()> {
    let child_1 = process::Command::new("yes")
        .arg("abracadabra")
        .stdout(process::Stdio::piped())
        .spawn()?;
    let pipe: process::ChildStdout = child_1.stdout.unwrap();
    let child_2 = process::Command::new("head")
        .args(&["-n", "1"])
        .stdin(pipe)
        .stdout(process::Stdio::piped())
        .spawn()?;
    let output = child_2.wait_with_output()?;
    let result = String::from_utf8_lossy(&output.stdout);
    assert_eq!(result, "abracadabra\n");
    println!("Good from 'a'.");
    Ok(())
}
But while we can wait on child_2 at any point, the declaration of
pipe moves child_1, and so it’s not clear how to wait on child_1.
If we were to simply add child_1.wait()? before the assert_eq!, we’d
hit a compile-time error:
error[E0382]: borrow of moved value: `child_1`
--> src/main.rs:15:5
|
8 | let pipe: process::ChildStdout = child_1.stdout.unwrap();
| -------------- value moved here
...
15 | child_1.wait()?;
| ^^^^^^^ value borrowed here after partial move
|
= note: move occurs because `child_1.stdout` has type `std::option::Option<std::process::ChildStdout>`, which does not implement the `Copy` trait
We can manage to wait with unsafe and platform-specific
functionality (Rust Playground):
use std::process;

fn b() -> std::io::Result<()> {
    let mut child_1 = process::Command::new("yes")
        .arg("abracadabra")
        .stdout(process::Stdio::piped())
        .spawn()?;
    use std::os::unix::io::{AsRawFd, FromRawFd};
    let pipe: process::Stdio =
        unsafe { FromRawFd::from_raw_fd(child_1.stdout.as_ref().unwrap().as_raw_fd()) };
    let mut child_2 = process::Command::new("head")
        .args(&["-n", "1"])
        .stdin(pipe)
        .stdout(process::Stdio::piped())
        .spawn()?;
    println!("child_1 exited with: {:?}", child_1.wait().unwrap());
    println!("child_2 exited with: {:?}", child_2.wait().unwrap());
    let mut result_bytes: Vec<u8> = Vec::new();
    std::io::Read::read_to_end(child_2.stdout.as_mut().unwrap(), &mut result_bytes)?;
    let result = String::from_utf8_lossy(&result_bytes);
    assert_eq!(result, "abracadabra\n");
    println!("Good from 'b'.");
    Ok(())
}
This prints:
child_1 exited with: ExitStatus(ExitStatus(13))
child_2 exited with: ExitStatus(ExitStatus(0))
Good from 'b'.
This is good enough for the purposes of this question, but surely there
must be a safe and portable way to do this.
For comparison, here is how I would approach the task in C (without
bothering to capture child_2’s output):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define FAILIF(e, msg) do { if (e) { perror(msg); return 1; } } while (0)

void describe_child(const char *name, int status) {
    if (WIFEXITED(status)) {
        fprintf(stderr, "%s exited %d\n", name, WEXITSTATUS(status));
    } else if (WIFSIGNALED(status)) {
        fprintf(stderr, "%s signalled %d\n", name, WTERMSIG(status));
    } else {
        fprintf(stderr, "%s fate unknown\n", name);
    }
}

int main(int argc, char **argv) {
    int pipefd[2];
    FAILIF(pipe(pipefd), "pipe");

    pid_t pid_1 = fork();
    FAILIF(pid_1 < 0, "child_1: fork");
    if (!pid_1) {
        FAILIF(dup2(pipefd[1], 1) == -1, "child_1: dup2");
        FAILIF(close(pipefd[0]), "child_1: close pipefd");
        execlp("yes", "yes", "abracadabra", NULL);
        FAILIF(1, "child_1: execlp");
    }

    pid_t pid_2 = fork();
    FAILIF(pid_2 < 0, "child_2: fork");
    if (!pid_2) {
        FAILIF(dup2(pipefd[0], 0) == -1, "child_2: dup2");
        FAILIF(close(pipefd[1]), "child_2: close pipefd");
        execlp("head", "head", "-1", NULL);
        FAILIF(1, "child_2: execlp");
    }

    FAILIF(close(pipefd[0]), "close pipefd[0]");
    FAILIF(close(pipefd[1]), "close pipefd[1]");

    int status_1;
    int status_2;
    FAILIF(waitpid(pid_1, &status_1, 0) == -1, "waitpid(child_1)");
    FAILIF(waitpid(pid_2, &status_2, 0) == -1, "waitpid(child_2)");
    describe_child("child_1", status_1);
    describe_child("child_2", status_2);
    return 0;
}
Save to test.c and run with make test && ./test:
abracadabra
child_1 signalled 13
child_2 exited 0
Use Option::take:
let pipe = child_1.stdout.take().unwrap();
let child_2 = process::Command::new("head")
    .args(&["-n", "1"])
    .stdin(pipe)
    .stdout(process::Stdio::piped())
    .spawn()?;
let output = child_2.wait_with_output()?;
let status_1 = child_1.wait()?;
// On Unix, `yes` should have been terminated by SIGPIPE once `head` exited;
// std::os::unix::process::ExitStatusExt exposes this as
// status_1.signal() == Some(13).
See also:
How do I move out of a struct field that is an Option?
Unable to pipe to or from spawned child process more than once
How do I read the output of a child process without blocking in Rust?
Reading and writing to a long-running std::process::Child
Cannot move out of borrowed content when unwrapping

JNI, How to access the current Java thread in JNI

Is there a way to get the Java thread (ID, name) from JNI? I am not talking about passing Thread.currentThread().getId() from Java to JNI. Does JNI provide an API to access the currently running thread?
You can (as mentioned by Alex) resort to java.lang.Thread.
// First, we have to find the Thread class
jclass cls = (*env)->FindClass(env, "java/lang/Thread");

// Then, we can look for its static method 'currentThread'
/* Remember that you can always get the method signature using the javap tool
   > javap -s -p java.lang.Thread | grep -A 1 currentThread
       public static native java.lang.Thread currentThread();
         descriptor: ()Ljava/lang/Thread;
*/
jmethodID mid =
    (*env)->GetStaticMethodID(env, cls, "currentThread", "()Ljava/lang/Thread;");

// Once you have the method, you can call it. Remember that the result is
// a jobject
jobject thread = (*env)->CallStaticObjectMethod(env, cls, mid);
if( thread == NULL ) {
    printf("Error while calling static method: currentThread\n");
}

// Now, we have to find another method - 'getId'
/* Remember that you can always get the method signature using the javap tool
   > javap -s -p java.lang.Thread | grep -A 1 getId
       public long getId();
         descriptor: ()J
*/
jmethodID mid_getid =
    (*env)->GetMethodID(env, cls, "getId", "()J");
if( mid_getid == NULL ) {
    printf("Error while calling GetMethodID for: getId\n");
}

// This time, we are calling an instance method; note the difference
// in the Call... method
jlong tid = (*env)->CallLongMethod(env, thread, mid_getid);

// Finally, let's call 'getName' of the Thread object
/* Remember that you can always get the method signature using the javap tool
   > javap -s -p java.lang.Thread | grep -A 1 getName
       public final java.lang.String getName();
         descriptor: ()Ljava/lang/String;
*/
jmethodID mid_getname =
    (*env)->GetMethodID(env, cls, "getName", "()Ljava/lang/String;");
if( mid_getname == NULL ) {
    printf("Error while calling GetMethodID for: getName\n");
}

// As above, we are calling an instance method
jobject tname = (*env)->CallObjectMethod(env, thread, mid_getname);

// Remember to retrieve the characters from the String object
const char *c_str;
c_str = (*env)->GetStringUTFChars(env, tname, NULL);
if(c_str == NULL) {
    return;
}

// display the message from JNI
printf("[C ] name: %s id: %ld\n", c_str, tid);

// and make sure to release the allocated memory before leaving JNI
(*env)->ReleaseStringUTFChars(env, tname, c_str);
You can find full sample here: https://github.com/mkowsiak/jnicookbook/tree/master/recipes/recipeNo044

How to lower CPU usage of finish_task_switch(), called by epoll_wait?

I've written a simple epoll-driven server to benchmark network/IO performance. The server simply receives a request and sends a response immediately. It is slower than redis-server's GET, 38k/s vs 40k/s. Both use redis-benchmark as the load runner, and both max out a CPU (>99%).
bench redis-server: redis-benchmark -n 1000000 -c 20 -t get -p 6379
bench myserver : redis-benchmark -n 1000000 -c 20 -t get -p 6399
I've profiled them using Linux perf and eliminated epoll_ctl in myserver (as redis-server does). Now the problem is that the function finish_task_switch() takes too much CPU time, about 10%-15% (versus about 3% for redis-server and redis-benchmark, on the same machine).
The call flow (read it top-down) is -> epoll_wait(25%) -> entry_SYSCALL_64_after_hwframe(23.56%) -> do_syscall_64(23.23%) -> sys_epoll_wait(22.36%) -> ep_poll(21.88%) -> schedule_hrtimeout_range(12.98%) -> schedule_hrtimeout_range_clock(12.74%) -> schedule(11.30%) -> _schedule(11.30%) -> finish_task_switch(10.82%)
I've tried writing the server using the raw epoll API, and using Redis's API in redis/src/ae.c; nothing changed.
I've examined how redis-server and redis-benchmark use epoll; no tricks found.
Redis's CFLAGS are used for myserver, the same as for redis-benchmark.
The CPU usage has nothing to do with level/edge-triggered mode, blocking or non-blocking client fds, or whether epoll_wait's timeout is set or not.
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h> // exit
#include <string.h> // memset
#include "anet.h"

#define MAX_EVENTS 32

typedef struct {
    int fd;
    char querybuf[256];
} client;

client *clients;
char err[256];
#define RESPONSE_REDIS "$128\r\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\r\n"
static int do_use_fd(client *c)
{
    int n = read(c->fd, c->querybuf, sizeof(c->querybuf));
    if (n == 0) { printf("Client Closed\n"); return n; }
    n = write(c->fd, RESPONSE_REDIS, sizeof(RESPONSE_REDIS)-1);
    return n;
}

int main()
{
    struct epoll_event ev, events[MAX_EVENTS];
    int listen_sock, conn_sock, nfds, epollfd;

    epollfd = epoll_create(MAX_EVENTS);
    listen_sock = anetTcpServer(err, 6399, NULL, MAX_EVENTS);
    ev.events = EPOLLIN;
    ev.data.fd = listen_sock;
    epoll_ctl(epollfd, EPOLL_CTL_ADD, listen_sock, &ev);

    clients = (client *)malloc(sizeof(client) * MAX_EVENTS);
    memset(clients, 0, sizeof(client) * MAX_EVENTS);

    for (;;) {
        int n;
        struct sockaddr addr;
        socklen_t addrlen = sizeof(addr);
        nfds = epoll_wait(epollfd, events, MAX_EVENTS, 100);
        for (n = 0; n < nfds; ++n) {
            if (events[n].data.fd == listen_sock) {
                conn_sock = accept(listen_sock,
                                   (struct sockaddr *) &addr, &addrlen);
                anetNonBlock(err, conn_sock);
                ev.events = EPOLLIN;
                //ev.events = EPOLLIN | EPOLLET;
                ev.data.fd = conn_sock;
                epoll_ctl(epollfd, EPOLL_CTL_ADD, conn_sock, &ev);
                clients[conn_sock].fd = conn_sock;
            } else {
                client *c = &clients[events[n].data.fd];
                int ret = do_use_fd(c);
                if (ret == 0) {
                    epoll_ctl(epollfd, EPOLL_CTL_DEL, c->fd, &ev);
                }
            }
        }
    }
}
The server's listen fd was blocking. Setting it non-blocking lowers the usage of finish_task_switch() to <2%.

Passing an inlined CArray in a CStruct to a shared library using NativeCall

This is a follow-up question to "How to declare native array of fixed size in Perl 6?".
In that question it was discussed how to incorporate an array of a fixed size into a CStruct. In this answer it was suggested to use HAS to inline a CArray in the CStruct. When I tested this idea, I ran into some strange behavior that could not be resolved in the comments section below the question, so I decided to write it up as a new question. Here is my C test library code:
slib.c:
#include <stdio.h>

struct myStruct
{
    int A;
    int B[3];
    int C;
};

void use_struct (struct myStruct *s) {
    printf("sizeof(struct myStruct): %ld\n", sizeof( struct myStruct ));
    printf("sizeof(struct myStruct *): %ld\n", sizeof( struct myStruct *));
    printf("A = %d\n", s->A);
    printf("B[0] = %d\n", s->B[0]);
    printf("B[1] = %d\n", s->B[1]);
    printf("B[2] = %d\n", s->B[2]);
    printf("C = %d\n", s->C);
}
To generate a shared library from this, I used:
gcc -c -fpic slib.c
gcc -shared -o libslib.so slib.o
Then, the Perl 6 code:
p.p6:
use v6;
use NativeCall;

class myStruct is repr('CStruct') {
    has int32 $.A is rw;
    HAS int32 @.B[3] is CArray is rw;
    has int32 $.C is rw;
}
sub use_struct(myStruct $s) is native("./libslib.so") { * };
my $s = myStruct.new();
$s.A = 1;
$s.B[0] = 2;
$s.B[1] = 3;
$s.B[2] = 4;
$s.C = 5;
say "Expected size of Perl 6 struct: ", (nativesizeof(int32) * 5);
say "Actual size of Perl 6 struct: ", nativesizeof( $s );
say 'Number of elements of $s.B: ', $s.B.elems;
say "B[0] = ", $s.B[0];
say "B[1] = ", $s.B[1];
say "B[2] = ", $s.B[2];
say "Calling library function..";
say "--------------------------";
use_struct( $s );
The output from the script is:
Expected size of Perl 6 struct: 20
Actual size of Perl 6 struct: 24
Number of elements of $s.B: 3
B[0] = 2
B[1] = 3
B[2] = 4
Calling library function..
--------------------------
sizeof(struct myStruct): 20
sizeof(struct myStruct *): 8
A = 1
B[0] = 0 # <-- Expected 2
B[1] = 653252032 # <-- Expected 3
B[2] = 22030 # <-- Expected 4
C = 5
Questions:
Why does nativesizeof( $s ) give 24 (and not the expected value of 20)?
Why is the content of the array B in the structure not as expected when printed from the C function?
Note:
I am using Ubuntu 18.04 and Perl 6 Rakudo version 2018.04.01, but have also tested with version 2018.05
Your code is correct. I just fixed that bug in MoarVM, and added tests to rakudo, similar to your code:
In C:
typedef struct {
    int a;
    int b[3];
    int c;
} InlinedArrayInStruct;
In Perl 6:
class InlinedArrayInStruct is repr('CStruct') {
    has int32 $.a is rw;
    HAS int32 @.b[3] is CArray;
    has int32 $.c is rw;
}
See these patches:
https://github.com/MoarVM/MoarVM/commit/ac3d3c76954fa3c1b1db14ea999bf3248c2eda1c
https://github.com/rakudo/rakudo/commit/f8b79306cc1900b7991490eef822480f304a56d9
If you are not building rakudo (and also NQP and MoarVM) directly from the latest source on GitHub, you probably have to wait for the 2018.08 release that will appear here: https://rakudo.org/files

Why does my pam work for sudo and not for ssh?

I have written a PAM module that displays a QR code when we run the $ sudo su command. The module only displays the QR code; no verification is made and no password is asked for.
I tried to use this PAM module with ssh, but nothing is displayed on the screen. Does anybody know why?
Now I have half of a QR code...
This function draws the QR code on the terminal:
void output_ansi(FILE * file, const struct qr_bitmap * bmp)
{
    const char * out[2] = {
        " ",
        "\033[7m \033[0m",
    };
    unsigned char * line;
    size_t x, y;

    line = bmp->bits;
    for (y = 0; y < bmp->height; ++y) {
        fprintf(file, "%d\n", y);
        for (x = 0; x < bmp->width; ++x) {
            int mask = 1 << (x % CHAR_BIT);
            int byte = line[x / CHAR_BIT];
            fprintf(file, "%s", out[!!(byte & mask)]);
        }
        fputc('\n', file);
        line += bmp->stride;
    }
}
But by displaying the value of y in the for loop, I noticed that the first 9 lines are not printed... Does anybody know what the problem is?