How can I bypass seccomp - seccomp

I know how to bypass filter mode, but I don't know how to bypass strict mode.
On 64-bit, the code does the following:
- reads 1024 bytes into buf, which is mapped rwxp
- runs buf()
- scanf address and scanf value, then writes value (long) at address (long)
It has only a canary and partial RELRO.
Given this, how can I bypass strict mode seccomp?

If you find a way, you'll get a CVE!
In seriousness, it is possible to find a way around SECCOMP in that if the handler thread listening to the messages from the SECCOMP-ed jail thread has a length check type bug, you could then exploit the handler thread (which presumably isn't as strongly SECCOMP-ed as the jail thread). Then you'd follow a normal exploit chain.
However, in the general case, folks are putting SECCOMP on a process because it's doing untrusted stuff. As a result it's unlikely that code execution on a jailed thread will allow for priv escalation, simply because it's unlikely the code trusts the inputs from the jailed thread!
Bypassing SECCOMP directly would be really hard: you'd have to find a kernel vulnerability in one of the allowed system calls, or a processor-level vulnerability. In strict mode this is normally considered intractable.
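What strict mode enforces, as an added example that is not part of the original answer: once a process enters SECCOMP_MODE_STRICT it may only call read, write, exit and sigreturn, and the kernel kills it with SIGKILL on any other system call. The function name run_strict_child and its parameter are made up for this illustration; Linux only.

```c
#define _GNU_SOURCE
#include <linux/seccomp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child and put it into seccomp strict mode. If extra >= 0 the child
 * then issues that syscall number (with no arguments); otherwise it stays
 * inside the allowed set. Returns the child's wait status, or -1 if
 * fork/waitpid failed. */
int run_strict_child(long extra)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
            _exit(2);                      /* seccomp not available here */
        static const char msg[] = "child: now in strict mode\n";
        syscall(SYS_write, STDERR_FILENO, msg, sizeof msg - 1); /* allowed */
        if (extra >= 0)
            syscall(extra);                /* outside the set: SIGKILL */
        /* exit(2) is allowed; glibc's _exit() uses exit_group(2), which is
         * not, so the raw syscall is needed for a clean exit. */
        syscall(SYS_exit, 0);
    }
    int status;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return status;
}

int main(void)
{
    int ok = run_strict_child(-1);
    int bad = run_strict_child(SYS_getpid);
    printf("allowed only: exited=%d code=%d\n", WIFEXITED(ok), WEXITSTATUS(ok));
    printf("extra getpid: signaled=%d sig=%d (SIGKILL=%d)\n",
           WIFSIGNALED(bad), WTERMSIG(bad), SIGKILL);
    return 0;
}
```

Even a harmless getpid ends the process, which is why the answer treats strict mode as leaving only kernel or processor bugs in the four permitted calls.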

Related

OpenJDK debug with printf?

I am hacking OpenJDK7 to implement an algorithm. In the process of doing this, I need to output debug information to stdout. As far as I can see in the code base, all printing is done using outputStream*->print_cr(). I wonder why printf() was not used at all?
Part of the reason I'm asking is that I have in fact used a lot of printf() calls, and I have been seeing weird bugs such as random memory corruption and random JVM crashes. Is there any chance that my printf() is the root cause? (Assume that the logic of my code is bug-free, of course.)
> why printf() was not used at all?

Instead of using stdio directly, HotSpot utilizes its own printing and logging framework. This extra abstraction layer provides the following benefits:
- Allows printing not only to stdout but to an arbitrary stream. Different JVM parts may log to separate streams (e.g. a dedicated stream for GC logs).
- Has its own implementation of formatting and buffering that does not allocate memory or use global locks.
- Gives control over all output emitted by JVM. For example, all output can be easily supplemented with timestamps.
- Facilitates porting to different platforms and environments.

The framework is further improved in JDK 9 to support JEP 158: Unified JVM Logging.

> Is there any chance that my printf() is the root cause?

No, unless printf is misused: e.g. arguments do not match format specifiers, or printf is called inside a signal handler. Otherwise it is safe to use printf for debugging. I did so many times when I worked on HotSpot.

How to detect if a function called fopen or not?

I'm trying to write a PAM backdoor scanner. A backdoored module may call the fopen function in pam_sm_authenticate (a normal file will not call fopen in this function) to store the username and password, but I can't use external commands such as "nm", "readelf" or something like that, so the only way seems to be to scan the pam_sm_authenticate function, find all call instructions and calculate the address to check if it is calling fopen. But that is too troublesome and I'm not very familiar with the ELF file format (I don't even know how to find the offset of pam_sm_authenticate; I'm using dlopen and dlsym to get the address), so I wonder if there is a better or easier way to detect it? Thank you.
TL;DR: building a robust "pam backdoor scanner" is theoretically impossible, so you should give up now and think about other ways to solve your problem.
Your question is very confusing, but I think what you are asking is: "can I determine programmatically whether pam_sm_authenticate calls fopen".
That is the wrong question to ask, for several reasons:
- if pam_sm_authenticate calls foo, and foo calls fopen, then you still have a problem, so you really should be scanning pam_sm_authenticate and every function it calls (recursively),
- fopen is far from the only way to write files: you could also use open, or system (as in system("echo $secret > /tmp/backdoor")), or a direct sys_open syscall, or a multitude of other hacks,
- finally, pam_sm_authenticate can use just-in-time compilation techniques to build arbitrary code (including code calling fopen) at runtime, and answering whether it does by examining its code is equivalent to solving the halting problem (i.e. impossible).

What happens if an MPI process crashes?

I am evaluating different multiprocessing libraries for a fault tolerant application. I basically need any process to be allowed to crash without stopping the whole application.
I can do it using the fork() system call. The limit here is that the process can be created on the same machine, only.
Can I do the same with MPI? If a process created with MPI crashes, can the parent process keep running and eventually create a new process?
Is there any alternative (possibly multiplatform and open source) library to get the same result?
As reported here, MPI 4.0 will have support for fault tolerance.
If you want collectives, you're going to have to wait for MPI-3.something (as High Performance Mark and Hristo Iliev suggest).
If you can live with point-to-point, and you are a patient person willing to raise a bunch of bug reports against your MPI implementation, you can try the following:
- disable the default MPI error handler
- carefully check every single return code from your MPI programs
- keep track in your application which ranks are up and which are down. Oh, and when they go down they can never get back. But you're unable to use collectives anyway (see my opening statement), so that's not a huge deal, right?
Here's an old paper (back when Bill still worked at Argonne; I think it's from 2003): http://www.mcs.anl.gov/~lusk/papers/fault-tolerance.pdf. It lays out the kinds of fault tolerant things one can do in MPI. Perhaps such a "constrained MPI" might still work for your needs.
If you're willing to go for something research quality, there are two implementations of a potential fault tolerance chapter for a future version of MPI (MPI-4?). The proposal is called User Level Failure Mitigation. There's an experimental version in MPICH 3.2a2 and a branch of Open MPI that also provides the interfaces. Both are far from production quality, but you're welcome to try them out. Just know that since this isn't in the MPI Standard, the function prefixes are not MPI_*. For MPICH, they're MPIX_*; for the Open MPI branch, they're OMPI_* (though I believe they'll be changing theirs to be MPIX_* soon as well).
As Rob Latham mentioned, there will be lots of work you'll need to do within your app to handle failures, though you don't necessarily have to check all of your return codes. You can/should use MPI error handlers as a callback function to simplify things. There's information/examples in the spec available along with the Open MPI branch.

Is there a way to ensure that a kernel module runs in a specific process context?

Basically, how can I make sure that in my module, a specific process is current. I've looked at kick_process, but I'm not sure how to have my module execute in the context of that process once kicking it into kernel mode.
I found this related question, but it has no replies. I believe an answer to my question could help that asker as well.
Note: I am aware that if I want the task_struct of a process, I can look it up. I'm interested in running in a specific context since I want to call functions that reference current.
The best way I have found to do anything in the context of a particular process in the kernel is to sleep in process context (wait_* family of functions), then wake up that thread and do whatever needs to be done in that context. This would of course mean you would have to have the application call into the kernel via IOCTL or something and sleep on that thread and wake it up whenever you need to do something. This seems to be a very widely used and popular mechanism.

sbcl vs clisp: USOCKET:TIMEOUT-ERROR. Do the two implementations access USOCKET differently?

I have a script that uses quicklisp to load zs3 for accessing Amazon's S3.
When I run the script with clisp, when (zs3:bucket-exists-p "Test") is run, USOCKET:TIMEOUT-ERROR occurs.
However, when I run it with sbcl, it runs properly.
Do they access usocket differently?
What are the pros and cons of each?
usocket is a compatibility layer which hides the underlying socket API of each Lisp implementation. There is bound to be an impedance mismatch in some cases, but for the most part it should just work.
I suspect zs3 is not often used with CLISP (or perhaps not at all!), and you're seeing the result of that. On the other hand one can generally expect libraries to be well-tested under SBCL since that is the most popular implementation.
Note also that threads are still experimental in CLISP; they are not enabled by default. The fact that sockets are often mixed with threads only decreases the relative use of CLISP + usocket.