Stream.Read on exe files - vb.net

I'm porting this piece of C code to VB.NET. The function cat receives a pointer to a file, reads it, and writes it to the console (it works the same way as the TYPE command from DOS). My VB implementation works well for all types of files except exe files, for which it writes far more "text" than the exe file actually contains.
This is the C code:
enum {
    BSIZE = 8192
};

int
cat(FILE *f)
{
    char buf[BSIZE];
    int n;

    while ((n = fread(buf, 1, BSIZE, f)) > 0)
        if (fwrite(buf, 1, n, stdout) != n)
            return 0;
    return 1;
}
And this is the VB code:
Public Const BSIZE As Integer = 8192

Function cat(ByVal f As Stream) As Integer
    Dim buf(BSIZE) As Byte
    Dim n As Integer
    Do
        n = f.Read(buf, 0, BSIZE)
        Try
            Console.OpenStandardOutput.Write(buf, 0, n)
        Catch ex As IOException
            Return 0
        End Try
    Loop While n > 0
    Return 1
End Function
As an example, I used the compiled exe of the C program itself as the file to read. The C version writes about 10 lines of unintelligible characters, while the VB version writes more than 70; among these, I noticed something interesting:
P§# Mingw runtime failure: VirtualQuery failed for %d bytes at
address %p Unknown pseudo relocation protocol version %d. Unknown
pseudo relocation bit size %d. GCC: (tdm-1) 5.1.0 GCC: (tdm-1)
5.1.0 GCC: (tdm-1) 5.1.0 GCC: (tdm-1) 5.1.0
It looks like the Stream.Read method is reading more than what it should read. Why is this happening?
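One possible explanation (an assumption on my part; this thread does not confirm it): the difference may be on the C side rather than the VB side. On Windows, a FILE* opened in text mode stops reading at the first 0x1A (Ctrl-Z) byte, and exe files usually contain one early on, so the C version would print less than the file actually holds, while Stream.Read returns every byte. A minimal sketch of the distinction, with "cat.exe" as a hypothetical stand-in for any binary file:

#include <stdio.h>

int main(void)
{
    FILE *ftext = fopen("cat.exe", "r");  /* text mode: on Windows, fread stops at 0x1A */
    FILE *fbin  = fopen("cat.exe", "rb"); /* binary mode: fread returns every byte */

    /* Passing fbin to cat() gives a faithful byte-for-byte dump;
       passing ftext can silently truncate the output. */
    if (ftext) fclose(ftext);
    if (fbin)  fclose(fbin);
    return 0;
}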

What's the protocol for calling Raku code from C code?

2023 update: The last person to edit this question deleted the critically important "LATEST LATEST UPDATE" part that @zentrunix had added near the top. I'm reinstating it.
LATEST LATEST UPDATE
Please see my answer below.
Thanks to everyone who took the time to answer and understand this question.
Original question
Say I have my event-driven TCP communications library in C.
From my Raku application, I can call a function in the C library using NativeCall.
my $server = create-server("127.0.0.1", 4000);
Now, from my callback in C (say onAccept) I want to call out to a Raku function in my application (say on-accept(connection) where connection will be a pointer to a C struct).
So, how can I do that: call my Raku function on-accept from my C function onAccept ?
P.S. I tried posting with the simple title "How to call Raku code from C code", but for whatever reason stackoverflow.com wouldn't let me do it, so I concocted this fancy title.
I was creating a 32-bit DLL.
We have to explicitly tell CMake to configure a 64-bit build:
cmake -G "Visual Studio 14 2015 Win64" ..
Anyway, now that the code runs, it's not really what I asked for, because the callback is still in C.
It seems that what I asked for isn't really possible.
I tried to use the approach suggested by Håkon, though I'm afraid I don't understand how it would work.
I'm on Windows, and unfortunately Raku can't find my DLLs, even if I put them in C:\Windows\System32. It finds "msvcrt" (the C runtime), but not my DLLs.
The DLL code (Visual Studio 2015):
#include <stdio.h>

#define EXPORTED __declspec(dllexport)

typedef int (*proto)(const char*);

proto raku_callback;

extern EXPORTED void set_callback(proto);
extern EXPORTED void foo(void);

void set_callback(proto arg)
{
    printf("In set_callback()..\n");
    raku_callback = arg;
}

void foo(void)
{
    printf("In foo()..\n");
    int res = raku_callback("hello");
    printf("Raku return value: %d\n", res);
}
The CMake code for the DLL:
CMAKE_MINIMUM_REQUIRED (VERSION 3.1)
add_library (my_c_dll SHARED my_c_dll.c)
The Raku code:
use v6.d;
use NativeCall;

sub set_callback(&callback (Str --> int32))
    is native("./my_c_dll") { * }

sub foo()
    is native("./my_c_dll") { * }

sub callback(Str $str --> Int) {
    say "Raku callback.. got string: {$str} from C";
    return 32;
}

## sub _getch() returns int32 is native("msvcrt") {*};
## print "-> ";
## say "got ", _getch();

set_callback(&callback);
# foo();
When I run it:
$ raku test-dll.raku
Cannot locate native library '(null)': error 0xc1
in method setup at D:\tools\raku\share\perl6\core\sources
\947BDAB9F96E0E5FCCB383124F923A6BF6F8D76B (NativeCall) line 298
in block set_callback at D:\tools\raku\share\perl6\core\sources
\947BDAB9F96E0E5FCCB383124F923A6BF6F8D76B (NativeCall) line 594
in block <unit> at test-dll.raku line 21
Raku version:
$ raku -v
This is Rakudo version 2020.05.1 built on MoarVM version 2020.05
implementing Raku 6.d.
Another approach could be to save a callback statically in the C library, for example (libmylib.c):
#include <stdio.h>

static int (*raku_callback)(char *arg);

void set_callback(int (*callback)(char *arg)) {
    printf("In set_callback()..\n");
    raku_callback = callback;
}

void foo() {
    printf("In foo()..\n");
    int res = raku_callback("hello");
    printf("Raku return value: %d\n", res);
}
Then from Raku:
use v6;
use NativeCall;

sub set_callback(&callback (Str --> int32)) is native('./libmylib.so') { * }
sub foo() is native('./libmylib.so') { * }

sub callback(Str $str --> Int) {
    say "Raku callback.. got string: {$str} from C";
    return 32;
}

set_callback(&callback);
foo();
Output:
In set_callback()..
In foo()..
Raku callback.. got string: hello from C
Raku return value: 32
Raku is a compiled language; depending on the implementation you've got, it is compiled for MoarVM, the JVM, or JavaScript. Through compilation, Raku code becomes bytecode for the corresponding virtual machine, so it is never, strictly speaking, native binary code.
However, Raku code seems to be cleverly organized in such a way that an object is effectively a pointer to a C endpoint, as demonstrated by Håkon Hægland's answer.
Regarding your latest problem, please bear in mind that what you are passing is not a path but a name, which is converted to a native shared-library name and then looked up using the local library-path conventions (PATH on Windows). So if Raku is not finding your DLL, add its directory to the path, or simply copy the DLL to one of the searched directories.
First of all, my apologies to @Håkon and @raiph.
Sorry for being so obtuse. :)
Håkon's answer does indeed answer my question, although for whatever reason I failed to see that until now.
Here is the code I played with in order to understand Håkon's solution.
// my_c_dll.c
// be sure to create a 64-bit dll
#include <stdio.h>
#include <windows.h>   // for Sleep()

#define EXPORTED __declspec(dllexport)

typedef int (*proto)(const char*);

proto raku_function;

extern EXPORTED void install_raku_function(proto);
extern EXPORTED void start_c_processing(void);

void install_raku_function(proto arg)
{
    printf("installing raku function\n");
    raku_function = arg;
}

void start_c_processing(void)
{
    printf("* ----> starting C processing..\n");
    for (int i = 0; i < 100; i++)
    {
        printf("* %d calling raku function\n", i);
        int res = raku_function("hello");
        printf("* %d raku function returned: %d\n", i, res);
        Sleep(1000);
    }
}
# test-dll.raku
use v6.d;
use NativeCall;

sub install_raku_function(&raku_function (Str --> int32))
    is native("./my_c_dll.dll") { * }

sub start_c_processing()
    is native("./my_c_dll.dll") { * }

sub my_raku_function(Str $str --> Int)
{
    say "# raku function called from C with parameter [{$str}]";
    return 32;
}

install_raku_function &my_raku_function;

start { start_c_processing; }

for ^1000 -> $i
{
    say "# $i idling in raku";
    sleep 1;
}
$ raku test-dll.raku
installing raku function
# 0 idling in raku
* ----> starting C processing..
* 0 calling raku function
# 0 raku function called from C with parameter [hello]
* 0 raku function returned: 32
# 1 idling in raku
* 1 calling raku function
# 1 raku function called from C with parameter [hello]
* 1 raku function returned: 32
# 2 idling in raku
* 2 calling raku function
# 2 raku function called from C with parameter [hello]
* 2 raku function returned: 32
# 3 idling in raku
* 3 calling raku function
# 3 raku function called from C with parameter [hello]
* 3 raku function returned: 32
# 4 idling in raku
* 4 calling raku function
# 4 raku function called from C with parameter [hello]
* 4 raku function returned: 32
# 5 idling in raku
* 5 calling raku function
# 5 raku function called from C with parameter [hello]
* 5 raku function returned: 32
^CTerminate batch job (Y/N)?
^C
What amazes me is that the Raku signature for my_raku_function maps cleanly to the C signature... isn't Raku wonderful? :)

problem with sprintf/printf with FreeRTOS on STM32F7

For two days I have been trying to make printf/sprintf work in my project...
MCU: STM32F722RETx
I tried newlib, heap_3, heap_4, etc.; nothing works. HardFault_Handler runs every time.
Now I am trying the simple implementation from this link and still have the same problem. I suppose my device has some problem with double-precision numbers, because the program enters HardFault_Handler from the line if (value != value) in the _ftoa function (which is strange, because this STM32 supports an FPU).
Do you guys have any idea? (Right now I am using heap_4.c.)
My compiler options:
target_compile_options(${PROJ_NAME} PUBLIC
    $<$<COMPILE_LANGUAGE:CXX>:
    -std=c++14
    >
    -mcpu=cortex-m7
    -mthumb
    -mfpu=fpv5-d16
    -mfloat-abi=hard
    -Wall
    -ffunction-sections
    -fdata-sections
    -O1 -g
    -DLV_CONF_INCLUDE_SIMPLE
)
Linker options:
target_link_options(${PROJ_NAME} PUBLIC
    ${LINKER_OPTION} ${LINKER_SCRIPT}
    -mcpu=cortex-m7
    -mthumb
    -mfloat-abi=hard
    -mfpu=fpv5-sp-d16
    -specs=nosys.specs
    -specs=nano.specs
    # -Wl,--wrap,malloc
    # -Wl,--wrap,_malloc_r
    -u_printf_float
    -u_sprintf_float
)
Linker script:
/* Highest address of the user mode stack */
_estack = 0x20040000; /* end of RAM */
/* Generate a link error if heap and stack don't fit into RAM */
_Min_Heap_Size = 0x200; /* required amount of heap */
_Min_Stack_Size = 0x400; /* required amount of stack */
/* Specify the memory areas */
MEMORY
{
    RAM (xrw)  : ORIGIN = 0x20000000, LENGTH = 256K
    FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 512K
}
UPDATE:
I don't think it is a stack problem; I have set configCHECK_FOR_STACK_OVERFLOW to 2, but the hook function is never called. I found a strange thing: this solution works:
float d = 23.5f;
char buffer[20];
sprintf(buffer, "temp %f", 23.5f);
but this one does not:
float d = 23.5f;
char buffer[20];
sprintf(buffer, "temp %f",d);
I have no idea why passing the variable by value generates a HardFault_Handler...
You can implement a hard fault handler that will at least provide you with the SP location at the time the issue occurs. This should provide more insight.
https://www.freertos.org/Debugging-Hard-Faults-On-Cortex-M-Microcontrollers.html
It should let you know whether your issue is due to a floating-point error within the MCU or to a branching error, possibly caused by some linking problem.
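For reference, below is roughly the handler that the FreeRTOS article describes: a minimal sketch for a GCC/Cortex-M toolchain (the function names follow the linked article; adapt the vector name to your startup code):

#include <stdint.h>

void prvGetRegistersFromStack(uint32_t *pulFaultStackAddress);

/* The fault handler passes the active stack pointer (MSP or PSP,
   whichever was in use when the fault occurred) to a C function. */
void HardFault_Handler(void)
{
    __asm volatile
    (
        " tst lr, #4                                              \n"
        " ite eq                                                  \n"
        " mrseq r0, msp                                           \n"
        " mrsne r0, psp                                           \n"
        " ldr r1, [r0, #24]                                       \n"
        " ldr r2, handler2_address_const                          \n"
        " bx r2                                                   \n"
        " handler2_address_const: .word prvGetRegistersFromStack  \n"
    );
}

void prvGetRegistersFromStack(uint32_t *pulFaultStackAddress)
{
    /* Volatile so the variables are not optimised away; inspect them
       in the debugger once the loop below is reached. */
    volatile uint32_t r0  = pulFaultStackAddress[0];
    volatile uint32_t r1  = pulFaultStackAddress[1];
    volatile uint32_t r2  = pulFaultStackAddress[2];
    volatile uint32_t r3  = pulFaultStackAddress[3];
    volatile uint32_t r12 = pulFaultStackAddress[4];
    volatile uint32_t lr  = pulFaultStackAddress[5]; /* link register */
    volatile uint32_t pc  = pulFaultStackAddress[6]; /* program counter */
    volatile uint32_t psr = pulFaultStackAddress[7]; /* program status register */

    (void)r0; (void)r1; (void)r2; (void)r3;
    (void)r12; (void)lr; (void)pc; (void)psr;

    /* pc holds the address of the faulting instruction; break here. */
    for (;;) { }
}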
I also had errors with printf when using FreeRTOS on my SiFive HiFive Rev B.
To solve it, I rewrote the _fstat and _write functions to change the output function used by printf:
/*
 * Retarget functions for printf()
 */
#include <errno.h>
#include <sys/stat.h>

int _fstat(int file, struct stat *st)
{
    errno = -ENOSYS;
    return -1;
}

int _write(int file, char *ptr, int len)
{
    extern void uart_putc(int c);
    int i;
    /* Output each character to the UART port */
    for (i = 0; i < len; i++) uart_putc((int)*ptr++);
    return len; /* report the number of bytes written */
}
And I created a uart_putc function for UART0 of the SiFive HiFive Rev B hardware:
#include <stdint.h>

void uart_putc(int c)
{
#define uart0_txdata (*(volatile uint32_t*)(0x10013000)) /* uart0 txdata register */
#define UART_TXFULL  (1u << 31)                          /* uart0 txdata full flag */
    while ((uart0_txdata & UART_TXFULL) != 0) { }
    uart0_txdata = c;
}
The newlib C runtime library (used in many embedded toolchains) internally uses its own malloc-family routines. newlib maintains some internal buffers and requires some support for thread-safety:
http://www.nadler.com/embedded/newlibAndFreeRTOS.html
A hard fault can also be caused by an unaligned memory access:
https://www.keil.com/support/docs/3777.htm
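As a quick illustration of that failure mode (my own hypothetical example, not code from this thread); whether a plain 32-bit load traps depends on the instruction the compiler emits and on the UNALIGN_TRP bit in SCB->CCR:

#include <stdint.h>

/* Casting a misaligned address and dereferencing it may fault:
   on Cortex-M, LDRD/LDM-class accesses, and any access when
   UNALIGN_TRP is set, trap on unaligned addresses. */
uint32_t read_u32_unsafe(const uint8_t *p)
{
    return *(const uint32_t *)p;   /* can fault if p is not 4-byte aligned */
}

/* A safe alternative: assemble the value byte by byte. */
uint32_t read_u32_safe(const uint8_t *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}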

Why does perror() change the orientation of a stream when it is redirected?

The standard says that:
The perror() function shall not change the orientation of the standard error stream.
This is the implementation of perror() in GNU libc.
Following are the tests when stderr is wide-oriented, multibyte-oriented and not oriented, prior to calling perror().
Tests 1) and 2) are OK. The issue is in test 3).
1) stderr is wide-oriented:
#include <stdio.h>
#include <wchar.h>
#include <errno.h>

int main(void)
{
    fwide(stderr, 1);
    errno = EINVAL;
    perror("");

    int x = fwide(stderr, 0);
    printf("fwide: %d\n", x);
    return 0;
}
$ ./a.out
Invalid argument
fwide: 1
$ ./a.out 2>/dev/null
fwide: 1
2) stderr is multibyte-oriented:
#include <stdio.h>
#include <wchar.h>
#include <errno.h>

int main(void)
{
    fwide(stderr, -1);
    errno = EINVAL;
    perror("");

    int x = fwide(stderr, 0);
    printf("fwide: %d\n", x);
    return 0;
}
$ ./a.out
Invalid argument
fwide: -1
$ ./a.out 2>/dev/null
fwide: -1
3) stderr is not oriented:
#include <stdio.h>
#include <wchar.h>
#include <errno.h>

int main(void)
{
    printf("initial fwide: %d\n", fwide(stderr, 0));
    errno = EINVAL;
    perror("");

    int x = fwide(stderr, 0);
    printf("fwide: %d\n", x);
    return 0;
}
$ ./a.out
initial fwide: 0
Invalid argument
fwide: 0
$ ./a.out 2>/dev/null
initial fwide: 0
fwide: -1
Why does perror() change the orientation of the stream if it is redirected? Is this proper behavior?
How does this code work? What is this __dup trick all about?
TL;DR: Yes, it's a bug in glibc. If you care about it, you should report it.
The quoted requirement that perror not change the stream orientation is in POSIX, but does not seem to be required by the C standard itself. However, POSIX seems quite insistent that the orientation of stderr not be changed by perror, even if stderr is not yet oriented. XSH 2.5 Standard I/O Streams:
The perror(), psiginfo(), and psignal() functions shall behave as described above for the byte output functions if the stream is already byte-oriented, and shall behave as described above for the wide-character output functions if the stream is already wide-oriented. If the stream has no orientation, they shall behave as described for the byte output functions except that they shall not change the orientation of the stream.
And glibc attempts to implement the POSIX semantics. Unfortunately, it doesn't quite get it right.
Of course, it is impossible to write to a stream without setting its orientation. So in an attempt to comply with this curious requirement, glibc attempts to make a new stream based on the same fd as stderr, using the code pointed to at the end of the OP:
if (__builtin_expect (_IO_fwide (stderr, 0) != 0, 1)
    || (fd = __fileno (stderr)) == -1
    || (fd = __dup (fd)) == -1
    || (fp = fdopen (fd, "w+")) == NULL)
  { ...
which, stripping out the internal symbols, is essentially equivalent to:
if (fwide (stderr, 0) != 0
    || (fd = fileno (stderr)) == -1
    || (fd = dup (fd)) == -1
    || (fp = fdopen (fd, "w+")) == NULL)
{
    /* Either stderr has an orientation or the duplication failed,
     * so just write to stderr
     */
    if (fd != -1) close(fd);
    perror_internal(stderr, s, errnum);
}
else
{
    /* Write the message to fp instead of stderr */
    perror_internal(fp, s, errnum);
    fclose(fp);
}
fileno extracts the fd from a standard C library stream. dup takes an fd, duplicates it, and returns the number of the copy. And fdopen creates a standard C library stream from an fd. In short, that doesn't reopen stderr; rather, it creates (or attempts to create) a copy of stderr which can be written to without affecting the orientation of stderr.
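A minimal sketch of the same trick (my own illustration, not glibc's code): write a message through a duplicate of stderr's fd, so the original stderr stream never acquires an orientation.

#include <stdio.h>
#include <unistd.h>
#include <wchar.h>

static void write_without_orienting_stderr(const char *msg)
{
    int fd = dup(fileno(stderr));  /* duplicate stderr's file descriptor */
    if (fd == -1)
        return;

    FILE *fp = fdopen(fd, "w");    /* "w": write-only, unlike glibc's "w+" */
    if (fp == NULL) {
        close(fd);
        return;
    }

    fputs(msg, fp);                /* orients fp, not stderr */
    fclose(fp);                    /* also closes the duplicate fd */
}

int main(void)
{
    write_without_orienting_stderr("hello\n");
    printf("fwide: %d\n", fwide(stderr, 0));  /* still 0: no orientation */
    return 0;
}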
Unfortunately, it doesn't work reliably because of the mode:
fp = fdopen(fd, "w+");
That attempts to open a stream which allows both reading and writing. And it will work with the original stderr, which is just a copy of the console fd, originally opened for both reading and writing. But when you bind stderr to some other device with a redirect:
$ ./a.out 2>/dev/null
you are passing the executable an fd opened only for output. And fdopen won't let you get away with that:
The application shall ensure that the mode of the stream as expressed by the mode argument is allowed by the file access mode of the open file description to which fildes refers.
The glibc implementation of fdopen actually checks, and returns NULL with errno set to EINVAL if you specify a mode which requires access rights not available to the fd.
So you could get your test to pass if you redirect stderr for both reading and writing:
$ ./a.out 2<>/dev/null
But what you probably wanted in the first place was to redirect stderr in append mode:
$ ./a.out 2>>/dev/null
and as far as I know, bash does not provide a way to read/append redirect.
I don't know why the glibc code uses "w+" as a mode argument, since it has no intention of reading from stderr. "w" should work fine, although it probably won't preserve append mode, which might have unfortunate consequences.
I'm not sure if there's a good answer to "why" without asking the glibc developers - it may just be a bug - but the POSIX requirement seems to conflict with ISO C, which reads in 7.21.2, ¶4:
Each stream has an orientation. After a stream is associated with an external file, but before any operations are performed on it, the stream is without orientation. Once a wide character input/output function has been applied to a stream without orientation, the stream becomes a wide-oriented stream. Similarly, once a byte input/output function has been applied to a stream without orientation, the stream becomes a byte-oriented stream. Only a call to the freopen function or the fwide function can otherwise alter the orientation of a stream. (A successful call to freopen removes any orientation.)
Further, perror seems to qualify as a "byte I/O function" since it takes a char * and, per 7.21.10.4 ¶2, "writes a sequence of characters".
Since POSIX defers to ISO C in the event of a conflict, there is an argument to be made that the POSIX requirement here is void.
As for the actual examples in the question:
1) Undefined behavior: a byte I/O function is called on a wide-oriented stream.
2) Nothing at all controversial: the orientation was correct for calling perror and did not change as a result of the call.
3) Calling perror oriented the stream to byte orientation. This seems to be required by ISO C but disallowed by POSIX.

sem_open - valgrind complains about uninitialised bytes

I have a trivial program:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <semaphore.h>

int main(void)
{
    const char sname[] = "xxx";
    sem_t *pSemaphor;

    if ((pSemaphor = sem_open(sname, O_CREAT, 0644, 0)) == SEM_FAILED) {
        perror("semaphore initialization");
        exit(1);
    }
    sem_unlink(sname);
    sem_close(pSemaphor);
}
When I run it under valgrind, I get the following error:
==12702== Syscall param write(buf) points to uninitialised byte(s)
==12702== at 0x4E457A0: __write_nocancel (syscall-template.S:81)
==12702== by 0x4E446FC: sem_open (sem_open.c:245)
==12702== by 0x4007D0: main (test.cpp:15)
==12702== Address 0xfff00023c is on thread 1's stack
==12702== in frame #1, created by sem_open (sem_open.c:139)
The code was extracted from a bigger project where it ran successfully for years, but now it is causing a segmentation fault.
The valgrind error from my example is the same as the one seen in the bigger project, but there it causes a crash, which my small example doesn't.
I see this with glibc 2.27-5 on Debian. In my case I only open the semaphores right at the start of a long-running program and it seems harmless so far - just annoying.
Looking at the code for sem_open.c, which is available at:
https://code.woboq.org/userspace/glibc/nptl/sem_open.c.html
it seems that valgrind is complaining about this line (270 as I look now):
if (TEMP_FAILURE_RETRY (__libc_write (fd, &sem.initsem, sizeof (sem_t)))
    == sizeof (sem_t)
However, sem.initsem is properly initialised earlier, in a fairly baroque manner: first by explicitly setting fields in sem.newsem (part of the same union), and then, once that is done, by a call to memset (lines 226-228):
/* Initialize the remaining bytes as well.  */
memset ((char *) &sem.initsem + sizeof (struct new_sem), '\0',
        sizeof (sem_t) - sizeof (struct new_sem));
I think these particular shenanigans are all quite optimal, but we need to make sure that all of the fields of new_sem have actually been initialised. We find the definition in https://code.woboq.org/userspace/glibc/sysdeps/nptl/internaltypes.h.html and it is this wonderful creation:
struct new_sem
{
#if __HAVE_64B_ATOMICS
  /* The data field holds both value (in the least-significant 32 bits) and
     nwaiters.  */
# if __BYTE_ORDER == __LITTLE_ENDIAN
#  define SEM_VALUE_OFFSET 0
# elif __BYTE_ORDER == __BIG_ENDIAN
#  define SEM_VALUE_OFFSET 1
# else
#  error Unsupported byte order.
# endif
# define SEM_NWAITERS_SHIFT 32
# define SEM_VALUE_MASK (~(unsigned int)0)
  uint64_t data;
  int private;
  int pad;
#else
# define SEM_VALUE_SHIFT 1
# define SEM_NWAITERS_MASK ((unsigned int)1)
  unsigned int value;
  int private;
  int pad;
  unsigned int nwaiters;
#endif
};
So if we __HAVE_64B_ATOMICS, the structure has a data field which contains both the value and nwaiters; otherwise these are separate fields.
In the initialisation of sem.newsem we can see that these are initialised correctly, as follows:
#if __HAVE_64B_ATOMICS
  sem.newsem.data = value;
#else
  sem.newsem.value = value << SEM_VALUE_SHIFT;
  sem.newsem.nwaiters = 0;
#endif
  /* pad is used as a mutex on pre-v9 sparc and ignored otherwise.  */
  sem.newsem.pad = 0;
  /* This always is a shared semaphore.  */
  sem.newsem.private = FUTEX_SHARED;
I'm doing all of this on a 64-bit system, so I think valgrind is complaining about the initialisation of the 64-bit sem.newsem.data with a 32-bit value, since from:
value = va_arg (ap, unsigned int);
we can see that value is defined simply as an unsigned int, which will usually still be 32 bits even on a 64-bit system (see What should be the sizeof(int) on a 64-bit machine?). But that should just be an implicit conversion to 64 bits when it is assigned.
So I think this is not a bug - just valgrind getting confused.
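If you want to double-check what memcheck thinks, valgrind's client-request macros can query definedness directly. A minimal sketch (my own, using a stand-in struct rather than glibc's):

#include <stdio.h>
#include <valgrind/memcheck.h>

struct sem_like { unsigned long data; int private_flag; int pad; };

int main(void)
{
    struct sem_like s;
    s.data = 0u;            /* 32-bit literal implicitly widened to 64 bits */
    s.private_flag = 0;
    s.pad = 0;

    /* Under valgrind this reports the first undefined byte, if any;
       outside valgrind the macro is a no-op. */
    VALGRIND_CHECK_MEM_IS_DEFINED(&s, sizeof s);
    printf("checked %zu bytes\n", sizeof s);
    return 0;
}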

How to get version information for dylibs in a directory

I want to get the version of a dylib. I have the path of the dylib for which I want the version number.
I've tried the "otool -L" command and it gives the proper output, but per my requirements I can't use it, since I have hundreds of dylibs in a directory for which I want version information, and I can't run "otool" for each dylib through NSTask and NSPipe.
I've also found the NSVersionOfLinkTimeLibrary() function, but per the documentation NSVersionOfLinkTimeLibrary returns the version number only for libraries linked against the current executable, not for other dylibs.
Any help on this would be appreciated.
Thanks,
Omkar
I've solved it by writing my own dylib parser. Below is the code snippet:
#include <stdio.h>
#include <mach-o/loader.h>
#import <Foundation/Foundation.h>

- (int64_t)getDylibVersion:(NSString *)dylibPath
{
    const char *strFilePath = [dylibPath UTF8String];
    FILE *fileHandle = fopen(strFilePath, "rb");
    struct mach_header mh;
    if (fileHandle)
    {
        size_t bytesRead = fread(&mh, 1, sizeof(mh), fileHandle);
        if (bytesRead == sizeof(mh))
        {
            if ((mh.magic == MH_MAGIC_64 || mh.magic == MH_MAGIC) && mh.filetype == MH_DYLIB)
            {
                // A 64-bit header has one extra "reserved" field; skip it
                // so the load commands are read from the correct offset.
                if (mh.magic == MH_MAGIC_64 && 0 != fseek(fileHandle, sizeof(uint32_t), SEEK_CUR))
                    goto fail;
                for (uint32_t j = 0; j < mh.ncmds; j++)
                {
                    union
                    {
                        struct load_command lc;
                        struct dylib_command dc;
                    } load_command;
                    if (sizeof(load_command.lc) != fread(&load_command.lc, 1, sizeof(load_command.lc), fileHandle))
                        goto fail;
                    switch (load_command.lc.cmd)
                    {
                        case LC_ID_DYLIB:
                        {
                            if (sizeof(load_command) - sizeof(load_command.lc) != fread(&load_command.lc + 1, 1, sizeof(load_command) - sizeof(load_command.lc), fileHandle))
                                goto fail;
                            fclose(fileHandle);
                            return load_command.dc.dylib.current_version;
                        }
                        default: // LC_SEGMENT, LC_UUID, LC_SYMTAB, etc. are skipped
                            break;
                    }
                    if (0 != fseek(fileHandle, load_command.lc.cmdsize - sizeof(load_command.lc), SEEK_CUR))
                        goto fail;
                }
            }
        }
    fail:
        fclose(fileHandle);
    }
    return -1;
}
Note that Mach-O dylib version numbers are encoded as 32-bit unsigned integers, with the major version in the high 16 bits, the minor version in bits 8 through 15, and the patch level in the low 8 bits:
uint32_t version = …;
uint32_t major = version >> 16;
uint32_t minor = (version >> 8) & 0xff;
uint32_t revision = version & 0xff;
Note also that the above code will only work for "thin" binaries. "Fat," multi-architecture binaries start with a fat header, which you'll need to negotiate first to find the slice for your desired architecture. Moreover, the above only works for slices whose endianness matches that of the running architecture.
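A rough sketch of that fat-header negotiation (my own illustration, with no error recovery): fat headers are always stored big-endian, hence the byte swapping.

#include <stdio.h>
#include <stdint.h>
#include <libkern/OSByteOrder.h>
#include <mach-o/fat.h>

/* Returns the file offset of the first slice matching cputype, or -1. */
long fat_slice_offset(FILE *f, cpu_type_t cputype)
{
    struct fat_header fh;
    if (fread(&fh, 1, sizeof(fh), f) != sizeof(fh))
        return -1;
    if (OSSwapBigToHostInt32(fh.magic) != FAT_MAGIC)
        return -1;                          /* not a fat binary */

    uint32_t nfat = OSSwapBigToHostInt32(fh.nfat_arch);
    for (uint32_t i = 0; i < nfat; i++) {
        struct fat_arch fa;                 /* entries follow the header */
        if (fread(&fa, 1, sizeof(fa), f) != sizeof(fa))
            return -1;
        if ((cpu_type_t)OSSwapBigToHostInt32(fa.cputype) == cputype)
            return (long)OSSwapBigToHostInt32(fa.offset);
    }
    return -1;
}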
The way I see it, you have two options:
1) Load each dylib into your process and look up the Mach-O headers on each, looking for the version numbers (see the sketch after this list). The documentation should be complete and thorough enough to get you started.
2) Open each dylib as a normal file, and read and parse the Mach-O headers yourself. This avoids having to load each dylib into the process, but it means you then need to either parse the Mach-O binary format yourself or find a library that can do it for you (I don't know of any off the top of my head).
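A sketch of option 1 (my own illustration; the path and the "foo" library name are hypothetical). dlopen() loads the dylib into the process, and NSVersionOfRunTimeLibrary() from <mach-o/dyld.h> (deprecated, but still available) then reports the current_version of a loaded library given its short name:

#include <stdio.h>
#include <dlfcn.h>
#include <mach-o/dyld.h>

int main(void)
{
    void *handle = dlopen("/usr/local/lib/libfoo.dylib", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Takes the library's short name, e.g. "foo" for libfoo.dylib;
       returns -1 if no such library is loaded. */
    int32_t version = NSVersionOfRunTimeLibrary("foo");
    if (version != -1)
        printf("version %d.%d.%d\n",
               version >> 16, (version >> 8) & 0xff, version & 0xff);

    dlclose(handle);
    return 0;
}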