Compiler: gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC)
OS: CentOS
I have the following code:
#include <iostream>

void
foo24()
{
    int x;
    std::cout << x << std::endl;
}

int
main()
{
    foo24();
    return 0;
}
If -Wall is turned on, there is a warning for uninitialized memory.
When I compile and run my executable with -fsanitize=address, I do not get any such warning or error.
Any idea why? Is it supposed to deal with only specific kinds of errors?
AddressSanitizer (enabled with -fsanitize=address) checks for buffer overflows, not for uninitialized memory. For the latter you should use MemorySanitizer (only available in Clang, enabled with -fsanitize=memory) or Valgrind.
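For illustration, here's the same uninitialized read in a form MemorySanitizer can flag, assuming a Clang toolchain is available (the file name and build command are mine, not from the question):

// msan_demo.cpp -- the same uninitialized read as foo24 above.
// Build and run (assumes clang++ with MSan support):
//   clang++ -fsanitize=memory -g msan_demo.cpp -o msan_demo && ./msan_demo
#include <iostream>

int main()
{
    int x;                       // never initialized
    std::cout << x << std::endl; // MSan should report a use-of-uninitialized-value here
    return 0;
}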
I have a program that uses cpprestsdk for HTTP querying and websocketpp for subscribing to a data stream. The program crashes immediately (it says Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)). But if I comment out either the HTTP query or the data-stream subscription, the program won't crash.
#include <websocketpp/config/asio_client.hpp>
#include <websocketpp/client.hpp>
#include "json.hpp"
#include <iostream>
#include <ctime>
#include <chrono>
#include <thread>
#include <cpprest/http_client.h>
#include <cpprest/filestream.h>
#include <vector>
#include <string>

using std::string;
using namespace web;
using std::cout, std::endl;
using std::vector;
using websocketpp::lib::placeholders::_1;
using websocketpp::lib::placeholders::_2;
using websocketpp::lib::bind;

typedef websocketpp::client<websocketpp::config::asio_tls_client> client;
typedef websocketpp::config::asio_client::message_type::ptr message_ptr;

void on_stream_data(websocketpp::connection_hdl hdl, message_ptr msg) {
}

class OrderBook {
public:
    void initialize() {
        web::http::client::http_client_config cfg;
        std::string uri = string("https://fapi.binance.com/fapi/v1/depth?symbol=btcusdt&limit=1000");
        web::http::client::http_client client(U(uri), cfg);
        web::http::http_request request(web::http::methods::GET);
        request.headers().add("Content-Type", "application/x-www-form-urlencoded");
        web::http::http_response response = client.request(request).get();
    }

    int start_stream() {
        client c;
        std::string uri = string("wss://fstream.binance.com/ws/btcusdt#depth#100ms");
        try {
            c.set_access_channels(websocketpp::log::alevel::all);
            c.clear_access_channels(websocketpp::log::alevel::frame_payload);
            c.init_asio();
            c.set_message_handler(bind(on_stream_data, ::_1, ::_2));
            websocketpp::lib::error_code ec;
            client::connection_ptr con = c.get_connection(uri, ec);
            if (ec) {
                std::cout << "could not create connection because: " << ec.message() << std::endl;
                return 0;
            }
            c.connect(con);
            c.run();
        } catch (websocketpp::exception const &e) {
            std::cout << e.what() << std::endl;
        }
        return 0;
    }
};

int main(int argc, char *argv[]) {
    OrderBook ob;
    ob.initialize(); // comment out either of these two lines and the program won't crash; otherwise it crashes on start
    std::this_thread::sleep_for(std::chrono::milliseconds(10000000));
    ob.start_stream(); // comment out either of these two lines and the program won't crash; otherwise it crashes on start
}
When I run this program in CLion debug mode, CLion shows that the crash comes from this function in /opt/homebrew/Cellar/boost/1.76.0/include/boost/asio/ssl/detail/impl/engine.ipp:
int engine::do_connect(void*, std::size_t)
{
    return ::SSL_connect(ssl_);
}
It says Exception: EXC_BAD_ACCESS (code=1, address=0xf000000000).
What's wrong with it? Is it because I run two pieces of code using boost::asio, and something shouldn't be initialized twice?
I can compile this and run it fine.
My best bet is that you are mixing versions, particularly Boost versions. A common failure mode is ODR violations leading to Undefined Behaviour.
Note that these header-only libraries depend on a number of boost libraries that are not header-only (e.g. Boost System, Thread and/or Chrono). You need to compile against the same version as the libraries you link.
If you use distribution packaged versions of any library (cpprestsdk, websocketpp or whatever json library that is you're using) then you'd be safest also using the distribution packaged version of Boost.
I'd personally simplify the situation by just using Boost (Beast for HTTP/websockets, Boost.JSON for, you guessed it, JSON).
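For what it's worth, here's a minimal sketch of the HTTP part using Beast, assuming Boost >= 1.70 with OpenSSL; the host and target come from the question, everything else is my boilerplate, not a drop-in replacement for the program above:

#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <boost/beast/ssl.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/ssl.hpp>
#include <iostream>
#include <stdexcept>

namespace asio  = boost::asio;
namespace beast = boost::beast;
namespace http  = beast::http;
using tcp = asio::ip::tcp;

int main()
{
    asio::io_context ioc;
    asio::ssl::context ctx(asio::ssl::context::tls_client);

    // Connect the TCP layer, set SNI, then do the TLS handshake
    tcp::resolver resolver(ioc);
    beast::ssl_stream<beast::tcp_stream> stream(ioc, ctx);
    if (!SSL_set_tlsext_host_name(stream.native_handle(), "fapi.binance.com"))
        throw std::runtime_error("failed to set SNI host name");
    beast::get_lowest_layer(stream).connect(resolver.resolve("fapi.binance.com", "443"));
    stream.handshake(asio::ssl::stream_base::client);

    // Send the GET request from the question and print the response body
    http::request<http::empty_body> req{http::verb::get,
        "/fapi/v1/depth?symbol=btcusdt&limit=1000", 11};
    req.set(http::field::host, "fapi.binance.com");
    http::write(stream, req);

    beast::flat_buffer buffer;
    http::response<http::string_body> res;
    http::read(stream, buffer, res);
    std::cout << res.body() << std::endl; // graceful TLS shutdown omitted for brevity
}

The point being: everything then compiles and links against a single Boost, which removes the version-mixing risk entirely.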
Running it all on a test Ubuntu 18.04 with the OS-packaged Boost 1.65, the start_stream sequence triggers this informative error:
[2022-05-22 13:42:11] [fatal] Required tls_init handler not present.
could not create connection because: Connection creation attempt failed
All while being UBSan/ASan clean. Perhaps that error helps you once you figure out the configuration problems that made your program crash.
I am currently building an executable-handling application in Objective-C, and I just want to know a simple way to determine whether an executable file can be launched (without launching it) or whether it is just a loadable one.
Thanks.
Once you've taken care of permission bits and whether the file is a Mach-O, there are three things you need to consider:
File type
CPU compatibility
Fat binaries
File type
Whether your Mach-O is an executable, dylib, kext, etc., can be determined from a field in its header.
From <mach-o/loader.h>:
struct mach_header {
    uint32_t      magic;
    cpu_type_t    cputype;
    cpu_subtype_t cpusubtype;
    uint32_t      filetype; // <---
    uint32_t      ncmds;
    uint32_t      sizeofcmds;
    uint32_t      flags;
};
Also from <mach-o/loader.h> you get all possible values for that field:
#define MH_OBJECT 0x1 /* relocatable object file */
#define MH_EXECUTE 0x2 /* demand paged executable file */
#define MH_FVMLIB 0x3 /* fixed VM shared library file */
#define MH_CORE 0x4 /* core file */
#define MH_PRELOAD 0x5 /* preloaded executable file */
#define MH_DYLIB 0x6 /* dynamically bound shared library */
#define MH_DYLINKER 0x7 /* dynamic link editor */
#define MH_BUNDLE 0x8 /* dynamically bound bundle file */
#define MH_DYLIB_STUB 0x9 /* shared library stub for static linking only, no section contents */
#define MH_DSYM 0xa /* companion file with only debug sections */
#define MH_KEXT_BUNDLE 0xb /* x86_64 kexts */
CPU compatibility
Just because it says "executable" doesn't mean it can be launched, though. If you take an iOS app and try to execute it on your iMac, you'll get a "Bad CPU type in executable" error message.
The different CPU types are defined in <mach/machine.h>, but the only way of comparing against the current CPU type is via defines:
#include <mach/machine.h>

bool is_cpu_compatible(cpu_type_t cputype)
{
    return
#ifdef __i386__
        cputype == CPU_TYPE_X86
#endif
#ifdef __x86_64__
        cputype == CPU_TYPE_X86 || cputype == CPU_TYPE_X86_64
#endif
#ifdef __arm__
        cputype == CPU_TYPE_ARM
#endif
#if defined(__arm64__)
        cputype == CPU_TYPE_ARM || cputype == CPU_TYPE_ARM64
#endif
    ;
}
(This will only work if your application has 64-bit slices, so that it always runs as 64-bit when it can. If you want to be able to run as a 32-bit binary and detect whether a 64-bit binary could be run, you'd have to use sysctl on "hw.cpu64bit_capable" together with defines, but then it gets even uglier.)
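For completeness, a rough sketch of that sysctl check; sysctlbyname and "hw.cpu64bit_capable" are real macOS APIs, but the helper name and the choice to treat a failed lookup as "not 64-bit capable" are mine:

#include <stdbool.h>
#include <stddef.h>
#include <sys/sysctl.h>

static bool cpu_is_64bit_capable(void)
{
    int capable = 0;
    size_t size = sizeof(capable);
    // hw.cpu64bit_capable is 1 on CPUs that can run 64-bit binaries
    if (sysctlbyname("hw.cpu64bit_capable", &capable, &size, NULL, 0) != 0)
        return false; // assume 32-bit only if the sysctl is unavailable
    return capable != 0;
}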
Fat binaries
Lastly, your binaries could be enclosed in fat headers. If so, you'll simply need to iterate over all slices, find the one corresponding to your current architecture, and check the two conditions above for that.
Implementation
There is no Objective-C API for this that I know of, so you'll have to fall back to C.
Given a pointer to the file's contents and the is_cpu_compatible function from above, you could do it like this:
#include <stdbool.h>
#include <stddef.h>
#include <libkern/OSByteOrder.h>
#include <mach-o/fat.h>
#include <mach-o/loader.h>

bool macho_is_executable(char *file)
{
    struct fat_header *fat = (struct fat_header*)file;
    // Fat file (fat headers are always big endian, hence the byte swapping below)
    if(fat->magic == FAT_CIGAM) // byte-swapped magic
    {
        struct fat_arch *arch = (struct fat_arch*)(fat + 1);
        uint32_t narchs = OSSwapBigToHostInt32(fat->nfat_arch);
        for(size_t i = 0; i < narchs; ++i)
        {
            if(is_cpu_compatible((cpu_type_t)OSSwapBigToHostInt32(arch[i].cputype)))
            {
                return macho_is_executable(&file[OSSwapBigToHostInt32(arch[i].offset)]);
            }
        }
        // File has no slice for this architecture
        return false;
    }
    // Thin file
    struct mach_header *hdr32 = (struct mach_header*)file;
    struct mach_header_64 *hdr64 = (struct mach_header_64*)file;
    if(hdr32->magic == MH_MAGIC) // little endian magic
    {
        return hdr32->filetype == MH_EXECUTE && is_cpu_compatible(hdr32->cputype);
    }
    else if(hdr64->magic == MH_MAGIC_64)
    {
        return hdr64->filetype == MH_EXECUTE && is_cpu_compatible(hdr64->cputype);
    }
    // Not a Mach-O
    return false;
}
Note that these are still rather basic checks, which will, for example, not detect corrupt Mach-Os, and which could easily be fooled by malicious files. If you wanted that, you would have to either emulate an operating system and launch the binary within it, or get into the research field of theoretical computer science and revolutionize the mathematics of provability.
My understanding is you want to distinguish a Mach-O standalone executable from a Mach-O dyld library. A standalone executable will use either:
LC_MAIN load command to denote the entry point, supported since Mac OS X 10.7
LC_UNIXTHREAD load command, the older non-dyld approach to do the same (still supported)
A dyld library will not have either of these Mach-O load commands, so if you detect one of them, it means it's a runnable standalone executable. That of course does not imply the binary is valid and that the kernel won't kill it for other reasons.
If you want to inspect some test files to verify this, I recommend a free tool called MachOView.
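To make that concrete, here's a minimal sketch of such a load-command scan, assuming a thin, little-endian, 64-bit Mach-O already mapped into memory; the function name is mine, and the LC_* constants and structs come from <mach-o/loader.h>:

#include <stdbool.h>
#include <stdint.h>
#include <mach-o/loader.h>

bool has_entry_point(const char *file)
{
    const struct mach_header_64 *hdr = (const struct mach_header_64*)file;
    if(hdr->magic != MH_MAGIC_64)
        return false; // not a thin 64-bit Mach-O

    // Load commands follow the header back to back
    const char *p = file + sizeof(*hdr);
    for(uint32_t i = 0; i < hdr->ncmds; ++i)
    {
        const struct load_command *lc = (const struct load_command*)p;
        if(lc->cmd == LC_MAIN || lc->cmd == LC_UNIXTHREAD)
            return true; // standalone executable entry point found
        p += lc->cmdsize;
    }
    return false; // no entry point: likely a dylib or bundle
}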
I'm trying to use a custom DLL from within Lua. For example, I have a simple DLL like
extern "C"
{
static int function_1(lua_State* L)
{
std::cout << "[DLL]this is a custom function" << std::endl;
lua_pushnumber(L, 10);
return 1;
}
__declspec(dllexport) int __cdecl luaopen_myDLL(lua_State* L)
{
L = luaL_newstate();
luaL_openlibs(L);
std::cout << "[DLL] being initialized!" << std::endl;
lua_register(L, "fun1", function_1);
luaL_dofile(L, "./run.lua");
return 1;
}
}
written in VS and built as a DLL.
After running within Lua
package.loadlib("./myDLL.dll", "luaopen_myDLL")()
or
require("myDLL")
the DLL is loaded and runs as expected, and it also runs the specified run.lua, which executes function_1 just fine.
run.lua has nothing special in it, just something like
f = function_1()
print("[Lua] Function_1 says", f, "\n");
My current issues are:
I cannot run function_1() from the initial Lua script that calls the DLL. Trying to do that, I get
attempt to call global 'function_1' (a nil value)
I must use L = luaL_newstate(); inside my C code. For some reason it doesn't work with the passed lua_State*, which I think is the reason why I cannot call the registered functions from the Lua script loading my DLL. Before running luaL_newstate(), my lua_State has a valid address, which doesn't change after the newstate.
I could theoretically run any Lua script from within my C library and execute the registered functions there, but this seems more like a dirty workaround to me.
My question now is: am I missing something essential?
p.s.: I'm using Lua 5.1
The code below should work. If it doesn't, it is likely for one of the following reasons:
the binary you use to run the initial Lua script (the one with require("myDLL") in it) was built against a different Lua version and/or does not use the shared DLL.
the Lua headers you use in your C++ code are from a different Lua version than the original lua.exe.
you link your project against a different Lua version.
you compile Lua again within your solution (you must use only the headers and the .lib file provided with the Lua distribution if you want to use lua.exe).
To make your code available in Lua, you must use the Lua headers for the proper Lua version, link against the proper .lib file, and use a lua.exe that uses the shared library (lua.dll, I guess).
static int function_1(lua_State* L)
{
    std::cout << "[DLL]this is a custom function" << std::endl;
    lua_pushnumber(L, 10);
    return 1;
}

extern "C" int __declspec(dllexport) luaopen_myDLL(lua_State *L) {
    // note: register into the lua_State passed in by require;
    // do NOT call luaL_newstate here
    std::cout << "[DLL] being initialized!" << std::endl;
    lua_register(L, "fun1", function_1);
    luaL_dofile(L, "./run.lua");
    return 0;
}
P.S. Please provide your solution files so I can help further, because it is not an issue with the code -- it's a linkage issue.
I'm attempting to build an LLVM pass using the instructions here and link it against the copy of LLVM installed by Julia. The pass currently compiles successfully, but CMake fails at link time with undefined-symbol errors.
[ 50%] Building CXX object VectorizePass/CMakeFiles/VectorizePass.dir/VectorizePass.cpp.o
[100%] Linking CXX shared module libVectorizePass.so
Undefined symbols for architecture x86_64:
"llvm::raw_ostream::write_escaped(llvm::StringRef, bool)", referenced from:
(anonymous namespace)::Hello::runOnFunction(llvm::Function&) in VectorizePass.cpp.o
"llvm::raw_ostream::write(char const*, unsigned long)", referenced from:
llvm::raw_ostream::operator<<(llvm::StringRef) in VectorizePass.cpp.o
There are dozens of undefined symbol errors followed by
"vtable for llvm::Pass", referenced from:
llvm::Pass::Pass(llvm::PassKind, char&) in VectorizePass.cpp.o
NOTE: a missing vtable usually means the first non-inline virtual
member function has no definition.
VectorizePass.cpp
#include "llvm/Pass.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;
namespace {
struct Hello : public FunctionPass {
static char ID;
Hello() : FunctionPass(ID) {}
bool runOnFunction(Function &F) override {
errs() << "Hello: ";
errs().write_escaped(F.getName()) << "\n";
return false;
}
};
}
char Hello::ID = 0;
static RegisterPass<Hello> X("Hello", "My Hello World Pass", false, false);
This is exactly as it is in the tutorial.
CMakeLists.txt (1)
add_library(VectorizePass MODULE VectorizePass.cpp)
SET(CMAKE_CXX_FLAGS "-std=c++11 -Wall -fno-rtti -D__STDC_CONSTANT_MACROS -D__STDC_LIMIT_MACROS -fPIC")
CMakeLists.txt (2)
project(VectorizePass)
set(LLVM_DIR "/Users/user/julia5/deps/build/llvm-3.7.1/build_Release")
set(LLVM_TOOLS_BINARY_DIR "/Users/user/julia5/deps/build/llvm-3.7.1/build_Release/Release/bin")
include_directories(${LLVM_INCLUDE_DIRS})
include_directories("/Users/user/julia5/deps/build/llvm-3.7.1/build_Release/include")
include_directories("/Users/user/julia5/deps/srccache/llvm-3.7.1/include")
link_directories("/Users/user/julia5/deps/build/llvm-3.7.1/build_Release/Release/lib")
link_directories("/Users/user/julia5/deps/build/llvm-3.7.1/build_Release/lib")
link_directories(${LLVM_LINK_DIRS})
add_definitions(${LLVM_DEFINITIONS})
add_subdirectory(VectorizePass)
Clearly, for some reason CMake isn't finding the appropriate object files, even though they are in the directories listed in the link_directories statements. What am I missing? I'm fairly new to CMake, so it may be something obvious.
I've attempted to include(AddLLVM) as suggested here, but CMake reports that it cannot find AddLLVM. This Stack Overflow post also suggests using a regular Makefile, but that does not work when compiling out-of-source passes, as all the paths in the regular LLVM Makefile.common/Makefile.config are relative and don't work at all.
Have you read this? You need to use the add_llvm_loadable_module() macro to define the target for your pass.
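For reference, a rough sketch of what that could look like for an out-of-source pass, assuming the LLVM 3.7 build tree from the question ships its CMake package files under share/llvm/cmake (adjust the path if they live under lib/cmake/llvm instead):

cmake_minimum_required(VERSION 2.8.12)
project(VectorizePass)

# Point find_package at the directory containing LLVMConfig.cmake
set(LLVM_DIR "/Users/user/julia5/deps/build/llvm-3.7.1/build_Release/share/llvm/cmake")
find_package(LLVM REQUIRED CONFIG)

include_directories(${LLVM_INCLUDE_DIRS})
add_definitions(${LLVM_DEFINITIONS})
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -fno-rtti")

# Make AddLLVM (and thus add_llvm_loadable_module) available
list(APPEND CMAKE_MODULE_PATH "${LLVM_CMAKE_DIR}")
include(AddLLVM)

# Loadable modules get the right link flags (e.g. -undefined dynamic_lookup
# on macOS), which is what makes the undefined-symbol errors go away
add_llvm_loadable_module(VectorizePass VectorizePass.cpp)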
I am facing a problem with splicing a list with itself. Note that I have gone through splice() on std::list and iterator invalidation.
There the question was about two different lists, but my question is about the same list.
mylist.splice(mylist.end(), mylist, ++mylist.begin());
It seems that gcc 3.x is invalidating the moved iterator, so I suppose it is deallocating and allocating the node again. This does not make sense for the same list. The SGI documentation says that this version of splice should not invalidate any iterators. Is this a bug in gcc 3.x, and if so, is there any workaround?
In the meantime I was going through the stl_list.h file, but got stuck at the transfer() function; I could not find a definition for it.
struct _List_node_base
{
    _List_node_base* _M_next; ///< Self-explanatory
    _List_node_base* _M_prev; ///< Self-explanatory

    static void
    swap(_List_node_base& __x, _List_node_base& __y);

    void
    transfer(_List_node_base * const __first,
             _List_node_base * const __last);

    void
    reverse();

    void
    hook(_List_node_base * const __position);

    void
    unhook();
};
Do you have any idea where I can look for these function definitions?
These functions are in the libstdc++ sources, not the headers. In 3.4 they're in libstdc++-v3/src/list.cc:
http://gcc.gnu.org/viewcvs/branches/gcc-3_4-branch/libstdc%2B%2B-v3/src/list.cc?view=markup
Have you tried compiling with -D_GLIBCXX_DEBUG? That will enable Debug Mode and tell you if you're using invalid iterators or doing anything else that causes the problem.
I just tried this simple test with GCC 3.4, with and without debug mode, and it worked fine:
#include <list>
#include <iostream>
#include <string>

int main()
{
    std::list<std::string> l;
    l.push_back("1");
    l.push_back("2");
    l.push_back("3");
    l.push_back("4");
    l.push_back("5");
    l.push_back("6");
    l.splice(l.end(), l, ++l.begin());
    for (std::list<std::string>::iterator i = l.begin(), e = l.end(); i != e; ++i)
        std::cout << *i << ' ';
    std::cout << std::endl;
}
Modifying it further and debugging it, I see that no element is destroyed and reallocated when doing the splice, so I suspect the bug is in your program. It's hard to know, as you haven't actually said what the problem is.