I'm catching quite a few "uninitialised value(s)" findings under Valgrind. The finding is expected because it's related to OpenSSL's PRNG:
==5787== Use of uninitialised value of size 8
==5787== at 0x533B449: _x86_64_AES_encrypt_compact (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x533B6DA: fips_aes_encrypt (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x56FBC47: ??? (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x56FBD27: ??? (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x56FBE47: ??? (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0xFFEFFFE17: ???
==5787== Uninitialised value was created by a heap allocation
==5787== at 0x4C28D84: malloc (vg_replace_malloc.c:291)
==5787== by 0x53575AF: CRYPTO_malloc (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x53FB52B: drbg_get_entropy (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x534C312: fips_get_entropy (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x534CABE: FIPS_drbg_instantiate (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x53FB94E: RAND_init_fips (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x5403F5D: EVP_add_cipher (in /usr/local/ssl/lib/libcrypto.so.1.0.0)
==5787== by 0x507B7C0: SSL_library_init (in /usr/local/ssl/lib/libssl.so.1.0.0)
==5787== by 0x4103E7: DoStartupOpenSSL() (ac-openssl-1.cpp:494)
==5787== by 0x419504: main (main.cpp:69)
==5787==
But I'm having trouble suppressing it (and that's not expected). I'm trying to use the following three rules, which use frame-level wildcards.
{
RAND_init_fips (1)
Memcheck:Cond
...
fun:RAND_init_fips
...
}
{
RAND_init_fips (2)
Memcheck:Value8
...
fun:RAND_init_fips
...
}
{
RAND_init_fips (3)
Memcheck:Value4
...
fun:RAND_init_fips
...
}
I don't want to do things like initialize the memory myself because of the Debian PRNG fiasco a few years ago. Plus, it's the OpenSSL FIPS Object Module, so I can't modify it: the source code and resulting object file are sequestered.
I'm not sure what the issue is, because it appears RAND_init_fips surrounded by frame-level wildcards should match the finding. Any ideas what might be going wrong here?
According to Tom Hughes on the Valgrind users mailing list, it's not possible to write the suppression rule.
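For what it's worth, Memcheck suppressions are matched against the stack of the error itself (the frames under "Use of uninitialised value"), not against the "Uninitialised value was created by" origin stack, which is where RAND_init_fips appears. A rule keyed on the use-site frames might therefore be worth a try. This is only a sketch based on the trace above; whether it matches depends on the symbols the FIPS module actually exports:

```
{
   fips-aes-use-site
   Memcheck:Value8
   fun:_x86_64_AES_encrypt_compact
   fun:fips_aes_encrypt
   ...
}
```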
Related
I want to generate a suppressions file with --gen-suppressions in Valgrind.
However, I do not want to go through thousands of lines of output, cutting and pasting out the suppressions by hand and removing the Valgrind stack traces and other Valgrind output.
Is there a way to do this easily? This seems like a very basic use case...
// I want this part vvvvv
{
<insert_a_suppression_name_here>
Memcheck:Leak
match-leak-kinds: reachable
fun:malloc
fun:strdup
fun:_XlcCreateLC
fun:_XlcDefaultLoader
fun:_XOpenLC
fun:_XrmInitParseInfo
obj:/usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
fun:XrmGetStringDatabase
obj:/usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
fun:XGetDefault
fun:GetXftDPI
fun:X11_InitModes_XRandR
fun:X11_InitModes
fun:X11_VideoInit
}
// I do not want this part vvvv
==187526== 2 bytes in 1 blocks are still reachable in loss record 2 of 137
==187526== at 0x483B7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==187526== by 0x4B7C50E: strdup (strdup.c:42)
==187526== by 0x5922D81: _XlcResolveLocaleName (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x5926387: ??? (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x5925956: ??? (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x592615C: _XlcCreateLC (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x5943664: _XlcDefaultLoader (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
==187526== by 0x592D995: _XOpenLC (in /usr/lib/x86_64-linux-gnu/libX11.so.6.3.0)
It is quite unlikely that all of the suppressions are different.
If you create a suppression like
{
XINIT-1
Memcheck:Leak
match-leak-kinds: reachable
fun:malloc
fun:strdup
fun:_XlcCreateLC
fun:_XlcDefaultLoader
fun:_XOpenLC
fun:_XrmInitParseInfo
obj:/usr/lib/x86_64-linux-gnu/libX11.so.6.3.0
}
Then re-run. Typically the error count will go down very quickly, and you will only need to add a fairly small number of suppressions (single or low double digits).
(You need to apply your knowledge of the code and lib(s) to get a sensible stack depth for the suppressions: too many stack entries and a suppression becomes too specific, so you need more of them; too few and you risk suppressing real problems.)
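The extraction step the question asks about can be automated: let Valgrind emit suppressions inline with the errors, then filter out just the `{ ... }` blocks, which start at column 0 (a sketch; `myapp` and the file names are placeholders):

```shell
# 1. run once, asking Valgrind to print a suppression after each error
valgrind --leak-check=full --gen-suppressions=all --log-file=raw.log ./myapp
# 2. keep only the suppression blocks (lines from "{" through "}" at column 0),
#    dropping the "==PID==" stack traces and other output
awk '/^\{/,/^\}/' raw.log > app.supp
# 3. re-run with the generated suppressions
valgrind --suppressions=app.supp ./myapp
```

You will likely still want to hand-edit the resulting file, merging near-duplicate blocks and shortening stacks with `...`, as suggested above.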
I receive the following errors from valgrind.
==30996== Conditional jump or move depends on uninitialised value(s)
==30996== at 0x12B28904: ??? (in /usr/lib64/libmlx4-rdmav2.so)
==30996== by 0xE12CF9A: ibv_open_device (in /usr/lib64/libibverbs.so.1.0.0)
==30996== by 0xAAFA03B: btl_openib_component_init (in /sw/arcts/centos7/openmpi/1.10.2-gcc-4.8.5/lib/libmpi.so.12.0.2)
==30996== by 0xAAF0832: mca_btl_base_select (in /sw/arcts/centos7/openmpi/1.10.2-gcc-4.8.5/lib/libmpi.so.12.0.2)
==30996== by 0xAAF0160: mca_bml_r2_component_init (in /sw/arcts/centos7/openmpi/1.10.2-gcc-4.8.5/lib/libmpi.so.12.0.2)
==30996== by 0xAAEE95D: mca_bml_base_init (in /sw/arcts/centos7/openmpi/1.10.2-gcc-4.8.5/lib/libmpi.so.12.0.2)
==30996== by 0xABE96D9: mca_pml_ob1_component_init (in /sw/arcts/centos7/openmpi/1.10.2-gcc-4.8.5/lib/libmpi.so.12.0.2)
==30996== by 0xABE75A8: mca_pml_base_select (in /sw/arcts/centos7/openmpi/1.10.2-gcc-4.8.5/lib/libmpi.so.12.0.2)
==30996== by 0xAA98BD3: ompi_mpi_init (in /sw/arcts/centos7/openmpi/1.10.2-gcc-4.8.5/lib/libmpi.so.12.0.2)
==30996== by 0xAAB87EC: PMPI_Init_thread (in /sw/arcts/centos7/openmpi/1.10.2-gcc-4.8.5/lib/libmpi.so.12.0.2)
==30996== by 0x5D4664: PetscInitialize.part.3 (in /scratch/kfid_flux/ykmizu/ROMLSS/bin/ks_main.x)
==30996== by 0x49B5B4: main (in /scratch/kfid_flux/ykmizu/ROMLSS/bin/ks_main.x)
==30996==
and this error repeats itself over and over again. I don't understand why PetscInitialize would give me a hard time; it's one of the first things I call in my main.c file, right after I initialize ints, doubles, etc.
PetscInitialize(&argc, &argv, NULL, NULL);
SlepcInitialize(&argc, &argv, NULL, NULL);
PetscViewerPushFormat(PETSC_VIEWER_STDOUT_SELF, PETSC_VIEWER_ASCII_MATLAB);
Are these just false errors? Any help would be greatly appreciated. Getting a little desperate about this. Thank you.
There are discussions here.
It seems that you use Open MPI, which is noisy under Valgrind. You can try to compile two versions of PETSc (two different PETSC_ARCHs): one using the optimized MPI on your system, and another built against MPICH via the configure option --download-mpich.
For debugging, select the PETSC_ARCH compiled with MPICH; for performance evaluation, select the PETSC_ARCH compiled with the optimized MPI of your platform.
Additionally, if you want to use both PETSc and SLEPc, you only need to call one of PetscInitialize or SlepcInitialize to start their environment; there is no point in calling both.
I hope it's helpful for you.
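The two-build setup described above might be configured like this (a sketch: the arch names are made up, and the Open MPI path is taken from the trace above; adjust everything to your installation):

```shell
# valgrind-friendly debug build using MPICH downloaded by PETSc
./configure PETSC_ARCH=arch-mpich-debug --download-mpich --with-debugging=1
# optimized build against the platform's Open MPI
./configure PETSC_ARCH=arch-openmpi-opt --with-debugging=0 \
    --with-mpi-dir=/sw/arcts/centos7/openmpi/1.10.2-gcc-4.8.5
# choose which build to compile and run against
export PETSC_ARCH=arch-mpich-debug
```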
I have a large data structure, a tree, that takes up about 2 GB of RAM. It has Clojure sets in the leaves and refs as the branches. The tree is built by reading and parsing a large flat file and inserting the rows into the tree, which takes about 30 seconds. Is there a way I can build the tree once, emit it to a clj file, and then compile the tree into my standalone jar, so I can look up values in the tree without re-reading the large text file? I think this would trim out the 30-second tree build, and it would also let me deploy my standalone jar without the text file coming along for the ride.
My first swing at this failed:
(def x (ref {:zebra (ref #{1 2 3 4})}))
#<Ref#6781a7dc: {:zebra #<Ref#709c4f85: #{1 2 3 4}>}>
(def y #<Ref#6781a7dc: {:zebra #<Ref#709c4f85: #{1 2 3 4}>}>)
RuntimeException Unreadable form clojure.lang.Util.runtimeException (Util.java:219)
Embedding data this big in compiled code may not be possible because of size limits imposed by the JVM; in particular, no single method may exceed 64 KiB in length. Embedding data in the way I describe further below also necessitates including tons of stuff in the class file it's going to live in, which doesn't seem like a great idea.
Given that you're using the data structure read-only, you can construct it once, emit it to a .clj / .edn file (edn is the serialization format based on Clojure literal notation), and include that file on your class path as a "resource" so that it's included in the überjar (put it in resources/ with default Leiningen settings; it will then get included in the überjar unless excluded by :uberjar-exclusions in project.clj). Then read it from the resource at runtime at the full speed of Clojure's reader:
(ns foo.core
(:require [clojure.java.io :as io]))
(defn get-the-huge-data-structure []
  (let [r   (io/resource "huge.edn")
        rdr (java.io.PushbackReader. (io/reader r))]
    ;; read from the PushbackReader, not the resource URL
    (read rdr)))
;; if you then do something like this:
(def ds (get-the-huge-data-structure))
;; your app will load the data as soon as this namespace is required;
;; for your :main namespace, this means as soon as the app starts;
;; note that if you use AOT compilation, it'll also be loaded at
;; compile time
You could also not add it to the überjar, but rather add it to the classpath when running your app. This way your überjar itself would not have to be huge.
Handling stuff other than persistent Clojure data could be accomplished using print-method (when serializing) and reader tags (when deserializing). Arthur already demonstrated using reader tags; to use print-method, you'd do something like
(defmethod print-method clojure.lang.Ref [x writer]
  (.write writer "#ref ")
  ;; print the dereferenced value after the tag
  (print-method @x writer))
;; from the REPL, after doing the above:
user=> (pr-str {:foo (ref 1)})
"{:foo #ref 1}"
Of course you only need to have the print-method methods defined when serializing; your deserializing code can leave them alone, but it will need appropriate data readers.
Disregarding the code size issue for a moment, as I find the data embedding issue interesting:
Assuming your data structure only contains immutable data natively handled by Clojure (Clojure persistent collections, arbitrarily nested, plus atomic items such as numbers, strings (atomic for this purpose), keywords, symbols; no Refs etc.), you can indeed include it in your code:
(defmacro embed [x]
x)
The generated bytecode will then recreate x without reading anything, by using constants included in the class file and static methods of the clojure.lang.RT class (e.g. RT.vector and RT.map).
This is, of course, how literals are compiled, since the macro above is a noop. We can make things more interesting though:
(ns embed-test.core
(:require [clojure.java.io :as io])
(:gen-class))
(defmacro embed-resource [r]
  (let [r   (io/resource r)
        rdr (java.io.PushbackReader. (io/reader r))]
    ;; read from the PushbackReader, not the resource URL
    (read rdr)))
(defn -main [& args]
(println (embed-resource "foo.edn")))
This will read foo.edn at compile time and embed the result in the compiled code (in the sense of including appropriate constants and code to reconstruct the data in the class file). At run time, no further reading will be performed.
Is this structure something that doesn't change? If not, consider using Java serialization to persist the structure. Deserializing will be much faster than rebuilding every time.
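In Clojure, the Java-serialization route suggested above can be sketched like this (hypothetical helper names; it assumes the tree contains only Serializable values, which Clojure's persistent collections are, so you would need to deref any Refs first):

```clojure
;; sketch: persist a value via Java serialization instead of re-parsing
(defn freeze! [x file]
  (with-open [out (java.io.ObjectOutputStream. (java.io.FileOutputStream. file))]
    (.writeObject out x)))

(defn thaw [file]
  (with-open [in (java.io.ObjectInputStream. (java.io.FileInputStream. file))]
    (.readObject in)))

;; usage: (freeze! my-tree "tree.bin") once, then (thaw "tree.bin") at startup
```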
If you can structure the tree as a single value instead of a tree of references to many values, then you would be able to print the tree and read it back. Because refs are not readable, you won't be able to treat the entire tree as something readable without doing your own parsing.
It may be worth looking into using the extensible reader to add print and read functions for your tree by making it a type.
here is a minimal example of using data-readers to produce references to sets and maps from a string:
First, define handlers for the contents of each EDN tag/type:
user> (defn parse-map-ref [m] (ref (apply hash-map m)))
#'user/parse-map-ref
user> (defn parse-set-ref [s] (ref (set s)))
#'user/parse-set-ref
Then bind the *data-readers* map to associate the handlers with textual tags:
(def y-as-string
"#user/map-ref [:zebra #user/set-ref [1 2 3 4]]")
user> (def y (binding [*data-readers* {'user/set-ref user/parse-set-ref
'user/map-ref user/parse-map-ref}]
(read-string y-as-string)))
user> y
#<Ref#6d130699: {:zebra #<Ref#7c165ec0: #{1 2 3 4}>}>
This also works with more deeply nested trees:
(def z-as-string
"#user/map-ref [:zebra #user/set-ref [1 2 3 4]
:ox #user/map-ref [:amimal #user/set-ref [42]]]")
user> (def z (binding [*data-readers* {'user/set-ref user/parse-set-ref
'user/map-ref user/parse-map-ref}]
(read-string z-as-string)))
#'user/z
user> z
#<Ref#2430c1a0: {:ox #<Ref#7cf801ef: {:amimal #<Ref#7e473201: #{42}>}>,
:zebra #<Ref#7424206b: #{1 2 3 4}>}>
Producing strings from trees can be accomplished by extending the print-method multimethod, though it would be a lot easier if you defined types for the ref-map and ref-set cases with deftype, so the printer can know which ref should produce which string.
If reading them back from strings is in general too slow, there are faster binary serialization libraries, such as Protocol Buffers.
How can I tell valgrind to stop showing any kind of error related to a certain library? I got lots of reports that look like this:
==24152== Invalid write of size 8
==24152== at 0xD9FF876: ??? (in /usr/lib64/dri/fglrx_dri.so)
==24152== by 0x110647AF: ???
==24152== Address 0x7f3c98553f20 is not stack'd, malloc'd or (recently) free'd
I could prune them by address (0x7fxxxxxxxxxx is not something that is allocated in userland), but my Valgrind build does not seem to accept --ignore-ranges=0x7f0000000000-0x7fffffffffff.
You can generate suppression lists using --gen-suppressions=all. Then you can add those to a .supp file under lib/valgrind.
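To drop everything Memcheck attributes to one shared object, you can also write a block per error kind with an obj: frame and a trailing frame-level wildcard; for the "Invalid write of size 8" above, the kind is Addr8 (a sketch):

```
{
   ignore-fglrx-invalid-write-8
   Memcheck:Addr8
   obj:/usr/lib64/dri/fglrx_dri.so
   ...
}
```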
Valgrinding a program that uses openldap2's libldap is a chore because of OpenSSL's use of uninitialized memory. There exists a --ignore-fn option, but only for the massif subcomponent of Valgrind. Is there anything similar for memcheck to exclude traces in which certain functions appear?
==13795== Use of uninitialised value of size 8
==13795== at 0x6A9C8CF: ??? (in /lib64/libz.so.1.2.3)
==13795== by 0x6A9A63B: inflate (in /lib64/libz.so.1.2.3)
==13795== by 0x68035C1: ??? (in /lib64/libcrypto.so.1.0.0)
==13795== by 0x6802B9F: COMP_expand_block (in /lib64/libcrypto.so.1.0.0)
==13795== by 0x64ABBCD: ssl3_do_uncompress (in /lib64/libssl.so.1.0.0)
==13795== by 0x64ACA6F: ssl3_read_bytes (in /lib64/libssl.so.1.0.0)
==13795== by 0x64A9F2F: ??? (in /lib64/libssl.so.1.0.0)
==13795== by 0x56B3E61: ??? (in /usr/lib64/libldap-2.4.so.2.5.4)
==13795== by 0x5E4DB1B: ??? (in /usr/lib64/liblber-2.4.so.2.5.4)
==13795== by 0x5E4E96E: ber_int_sb_read (in /usr/lib64/liblber-2.4.so.2.5.4)
==13795== by 0x5E4B4A6: ber_get_next (in /usr/lib64/liblber-2.4.so.2.5.4)
==13795== by 0x568FB9E: ??? (in /usr/lib64/libldap-2.4.so.2.5.4)
You can create a suppression file and use it to suppress errors coming from certain sources: http://valgrind.org/docs/manual/manual-core.html#manual-core.suppress
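Applied to the trace above, such a suppression might look like the following sketch (Value8 corresponds to the "Use of uninitialised value of size 8" kind, and the `...` entries are frame-level wildcards that absorb the unnamed zlib/libcrypto frames):

```
{
   openssl-zlib-uninitialised
   Memcheck:Value8
   ...
   fun:inflate
   ...
   fun:COMP_expand_block
   ...
}
```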