How to resolve the issue "Can't find import Effects" in Idris?

I was trying to compile the following "hello world" source file, after some time away from Idris, using a nix-shell environment (direnv):
module Main
import Effects
import Effect.StdIO
hello : Eff () [STDIO]
hello = putStrLn "Hello world!"
main : IO ()
main = run hello
My output was as follows during the experimentation; despite specifying what seems to be the right library, I still get the same error at the end:
[nix-shell:~/workspace/idris_tmp]$ idris --listlibs
00prelude-idx.ibc
prelude
00base-idx.ibc
base
[nix-shell:~/workspace/idris_tmp]$ exit
exit
brandon@brandon-750-170se-DevContainer:~/workspace/idris_tmp
$ nix-shell -p 'idrisPackages.with-packages (with idrisPackages; [ contrib effects ])' -p closurecompiler
these derivations will be built:
/nix/store/j44rkb0fpxqd4qkamrg1ysf9kbx1q2qa-idris-1.3.0.drv
these paths will be fetched (1.55 MiB download, 4.37 MiB unpacked):
/nix/store/1fcldlyp5n0ry22dsqfam68mpra6jky9-idris-effects-1.3.0
/nix/store/a9rjm84pbmvg6dmkdzhl9q1wliyi0q4b-idris-contrib-1.3.0
copying path '/nix/store/a9rjm84pbmvg6dmkdzhl9q1wliyi0q4b-idris-contrib-1.3.0' from 'https://cache.nixos.org'...
copying path '/nix/store/1fcldlyp5n0ry22dsqfam68mpra6jky9-idris-effects-1.3.0' from 'https://cache.nixos.org'...
building '/nix/store/j44rkb0fpxqd4qkamrg1ysf9kbx1q2qa-idris-1.3.0.drv'...
/nix/store/5dny6qnjfq9liya5z1sxvr2g64bqypwl-idris-base-1.3.0/nix-support:
propagated-build-inputs: /nix/store/1fcldlyp5n0ry22dsqfam68mpra6jky9-idris-effects-1.3.0/nix-support/propagated-build-inputs
/nix/store/a9rjm84pbmvg6dmkdzhl9q1wliyi0q4b-idris-contrib-1.3.0/nix-support:
propagated-build-inputs: /nix/store/1fcldlyp5n0ry22dsqfam68mpra6jky9-idris-effects-1.3.0/nix-support/propagated-build-inputs
/nix/store/a5x52wi84jgjiimpnkfpcl3mbpbkf1r4-idris-1.3.0/nix-support:
propagated-build-inputs: /nix/store/1fcldlyp5n0ry22dsqfam68mpra6jky9-idris-effects-1.3.0/nix-support/propagated-build-inputs
[nix-shell:~/workspace/idris_tmp]$ idris --listlibs
00effects-idx.ibc
effects
00base-idx.ibc
base
00prelude-idx.ibc
prelude
00contrib-idx.ibc
contrib
[nix-shell:~/workspace/idris_tmp]$ idris --codegen javascript hello.idr -o hello.js
Can't find import Effects

Try adding -p effects to your command line, as in
idris -p effects --codegen javascript hello.idr -o hello.js
This will load the additional library/package. You can avoid doing this by defining an ipkg file, which describes your build; see http://docs.idris-lang.org/en/latest/reference/packages.html
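For reference, a minimal ipkg file for this example might look something like the sketch below; the package and executable names are placeholders, and the opts line simply repeats the codegen flag used above:

package hello

opts = "--codegen javascript"
pkgs = effects
modules = Main

main = Main
executable = hello

You would then build with idris --build hello.ipkg instead of invoking the compiler on hello.idr directly.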

Related

error using yaml-cpp to get integer nested in key hierarchy

Initial Question
I have a config.yaml that has structure similar to
some_other_key: 34
a:
  b:
    c:
      d: 3
I thought I could do
YAML::Node config = YAML::LoadFile(config_filename.c_str());
int x = config["a"]["b"]["c"]["d"].as<int>();
but I get
terminate called after throwing an instance of
'YAML::TypedBadConversion<int>'
what(): bad conversion
How do I descend through my config.yaml to extract a value like this? I also get that same exception if I mistype one of the keys in the path, so I can't tell from the error whether I am accidentally working with a null node or whether there is a problem converting a valid node's value to int.
Follow up After First Replies
Thank you for replying! Maybe it is an issue with what is in the config.yaml? Here is a small example to reproduce it:
yaml file: config2.yaml
daq_writer:
  num: 3
  num_per_host: 3
  hosts:
    - local
  datasets:
    small:
      chunksize: 600
Python can read it.
Incidentally, I am on Linux (RHEL 7), but from a Python 3.6 environment everything looks good:
$ python -c "import yaml; print(yaml.load(open('config2.yaml','r')))"
{'daq_writer': {'num_per_host': 3, 'num': 3, 'datasets': {'small': {'chunksize': 600}}, 'hosts': ['local']}}
C++ yaml-cpp code
The file yamlex.cpp:
#include <string>
#include <iostream>
#include "yaml-cpp/yaml.h"

int main() {
    YAML::Node config = YAML::LoadFile("config2.yaml");
    int small_chunksize = config["daq_writer"]["datasets"]["smal"]["chunksize"].as<int>();
}
When I compile and run this, I get:
(lc2) psanagpu101: ~/rel/lc2-hdf5-110 $ c++ --std=c++11 -Iinclude -Llib -lyaml-cpp yamlex.cpp
(lc2) psanagpu101: ~/rel/lc2-hdf5-110 $ LD_LIBRARY_PATH=lib ./a.out
terminate called after throwing an instance of 'YAML::TypedBadConversion<int>'
what(): bad conversion
Aborted (core dumped)
(lc2) psanagpu101: ~/rel/lc2-hdf5-110 $ gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
I have been able to read top level keys, like the some_other_key that I referenced above, but I got an error when I went after this nested key. Good to know that syntax works!
You have a typo in your keys: instead of "small", you wrote "smal".
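On the other part of the question (telling a mistyped key apart from a genuine conversion problem), a minimal sketch along the following lines may help; the key names match the config2.yaml above, and the undefined-node check is the only addition:

#include <iostream>
#include "yaml-cpp/yaml.h"

int main() {
    YAML::Node config = YAML::LoadFile("config2.yaml");
    // Indexing with a missing key yields an undefined node rather than throwing,
    // so the looked-up node can be checked before attempting the conversion.
    YAML::Node node = config["daq_writer"]["datasets"]["small"]["chunksize"];
    if (!node) {
        std::cerr << "key path not found" << std::endl;
        return 1;
    }
    // A bad conversion at this point indicates a real type mismatch, not a typo in the path.
    std::cout << node.as<int>() << std::endl;
    return 0;
}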

Hello world does work in PE but not in PE64

I'm trying to write my own x64 Hello World program in FASM on Windows.
I tried to rewrite this version: How to write to the console in fasm?, as an x64 program like this:
format PE64
entry start

include 'win64a.inc'

section '.code' code readable executable

start:
        push    hello
        call    [printf]
        pop     rcx

        push    0
        call    [ExitProcess]

section '.rdata' data readable

        hello db 'Hello world!', 10, 0

section '.idata' import data readable writeable

        library kernel, 'KERNEL32.DLL', \
                msvcrt, 'msvcrt.dll'

        import kernel, ExitProcess, 'ExitProcess'
        import msvcrt, printf, 'printf'
Now the problem is that this compiles fine, but when it is executed it doesn't print anything to the console.
Why does this not work? Is printf from msvcrt silently incompatible with x64?
Have I overlooked something?
Can someone tell me how to rewrite this so that it actually prints "Hello world!"?
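For what it's worth, here is a minimal sketch of how an x64 version might look, assuming the two usual culprits: the Windows x64 calling convention (the first argument goes in RCX rather than on the stack, and the caller must reserve 32 bytes of shadow space on a 16-byte-aligned stack) and the subsystem (plain format PE64 defaults to GUI, so nothing is shown on a console):

format PE64 console
entry start

include 'win64a.inc'

section '.code' code readable executable

start:
        sub     rsp, 40                 ; 32 bytes of shadow space + 8 to realign the stack
        mov     rcx, hello              ; first argument in RCX, not pushed on the stack
        call    [printf]
        xor     ecx, ecx                ; exit code 0
        call    [ExitProcess]

section '.rdata' data readable

        hello db 'Hello world!', 10, 0

section '.idata' import data readable writeable

        library kernel32, 'KERNEL32.DLL', \
                msvcrt, 'msvcrt.dll'

        import kernel32, ExitProcess, 'ExitProcess'
        import msvcrt, printf, 'printf'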

opensplice dds Hello World Example

I am posting here after asking the question at the OpenSplice DDS forum and not receiving any reply. I am trying to use OpenSplice DDS on an Ubuntu machine. I am not sure if it serves as proof of a proper installation, but I have pasted my release.com file below. Now, I was able to run the ping-pong example just fine. But when I ran the executable sac_helloworld_pub (the HelloWorld example in the C programming language), I got the following error:
vishal@expmach:~/HDE/x86.linux2.6/examples/dcps/HelloWorld/c/standalone$ ./sac_helloworld_pub
Error in DDS_DomainParticipantFactory_create_participant: Creation failed: invalid handle
I did some searching, and it looks like I need to run the ospl start command from the terminal. But when I do so, I get a "No command ospl found" message. Below are the contents of the release.com file:
echo "<<< OpenSplice HDE Release V6.3.130716OSS For x86.linux2.6, Date 2013-07-30 >>>"
if [ "${SPLICE_ORB:=}" = "" ]
then
SPLICE_ORB=DDS_OpenFusion_1_6_1
export SPLICE_ORB
fi
if [ "${SPLICE_JDK:=}" = "" ]
then
SPLICE_JDK=jdk
export SPLICE_JDK
fi
OSPL_HOME="/home/vishal/HDE/x86.linux2.6"
OSPL_TARGET=x86.linux2.6
PATH=$OSPL_HOME/bin:$PATH
LD_LIBRARY_PATH=$OSPL_HOME/lib${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH
CPATH=$OSPL_HOME/include:$OSPL_HOME/include/sys:${CPATH:=}
OSPL_URI=file://$OSPL_HOME/etc/config/ospl.xml
OSPL_TMPL_PATH=$OSPL_HOME/etc/idlpp
. $OSPL_HOME/etc/java/defs.$SPLICE_JDK
export OSPL_HOME OSPL_TARGET PATH LD_LIBRARY_PATH CPATH OSPL_TMPL_PATH OSPL_URI
$#
release.com (END)
Sorry for the holidays-driven lack of 'reactivity' on the OpenSplice forum; I've answered your question there though.
Here's that same answer for completeness:
For the 6.3 community edition, the deployment model changed from shared-memory (v5.x) to the so-called single-process standalone deployment mode, where the middleware is simply linked (as libraries) with the application, so you don't need to start any daemons first (as was the case for the federated 'shared-memory' mode that was the default in V5).
So it's OK that you get the error when trying to call 'ospl', as that's not used anymore and so isn't in the distribution.
Now to your issue: your release.com looks OK to me, but perhaps you didn't actually 'source' it in your environment, i.e. call it with a '.' in front of it:
prompt> . release.com
You can verify that by doing an 'echo $OSPL_HOME' in your shell and seeing whether it actually shows the value of the environment variable as set by release.com.
Hope that helps,
-Hans
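For completeness, that check might look like this in the shell (a sketch, using the paths from the release.com pasted above):

$ cd /home/vishal/HDE/x86.linux2.6
$ . ./release.com
<<< OpenSplice HDE Release V6.3.130716OSS For x86.linux2.6, Date 2013-07-30 >>>
$ echo $OSPL_HOME
/home/vishal/HDE/x86.linux2.6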

Import class-dump info into GDB

Is there a way to import the output from class-dump into GDB?
Example code:
$ cat > test.m
#include <stdio.h>
#import <Foundation/Foundation.h>

@interface TestClass : NSObject
+ (int)randomNum;
@end

@implementation TestClass
+ (int)randomNum {
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}
@end

int main(void) {
    printf("num: %d\n", [TestClass randomNum]);
    return 0;
}
^D
$ gcc test.m -lobjc -o test
$ ./test
num: 4
$ gdb test
...
(gdb) b +[TestClass randomNum]
Breakpoint 1 at 0x100000e5c
(gdb) ^D
$ strip test
$ gdb test
...
(gdb) b +[TestClass randomNum]
Function "+[TestClass randomNum]" not defined.
(gdb) ^D
$ class-dump -A test
...
@interface TestClass : NSObject
{
}
+ (int)randomNum; // IMP=0x0000000100000e50
@end
I know I can now use b *0x0000000100000e50 in gdb, but is there a way of modifying GDB's symbol table to make it accept b +[TestClass randomNum]?
Edit: It would be preferable if it worked with GDB v6 and not only GDB v7, as GDB v6 is the latest version with Apple's patches.
It’s possible to load a symbol file in gdb with the add-symbol-file command. The hardest part is to produce this symbol file.
With the help of libMachObjC (which is part of class-dump), it’s very easy to dump all addresses and their corresponding Objective-C methods. I have written a small tool, objc-symbols which does exactly this.
Let’s use Calendar.app as an example. If you try to list the symbols with the nm tool, you will notice that the Calendar app has been stripped:
$ nm -U /Applications/Calendar.app/Contents/MacOS/Calendar
0000000100000000 T __mh_execute_header
0000000005614542 - 00 0000 OPT radr://5614542
But with objc-symbols you can easily retrieve the addresses of all the missing Objective-C methods:
$ objc-symbols /Applications/Calendar.app
00000001000c774c +[CALCanvasAttributedText textWithPosition:size:text:]
00000001000c8936 -[CALCanvasAttributedText createTextureIfNeeded]
00000001000c8886 -[CALCanvasAttributedText bounds]
00000001000c883b -[CALCanvasAttributedText updateBezierRepresentation]
...
00000001000309eb -[CALApplication applicationDidFinishLaunching:]
...
Then, with SymTabCreator you can create a symbol file, which is actually just an empty dylib with all the symbols.
Using objc-symbols and SymTabCreator together is straightforward:
$ objc-symbols /Applications/Calendar.app | SymTabCreator -o Calendar.stabs
You can check that Calendar.stabs contains all the symbols:
$ nm Calendar.stabs
000000010014a58b T +[APLCALSource printingCachedTextSize]
000000010013e7c5 T +[APLColorSource alternateGenerator]
000000010013e780 T +[APLColorSource defaultColorSource]
000000010013e7bd T +[APLColorSource defaultGenerator]
000000010011eb12 T +[APLConstraint constraintOfClass:withProperties:]
...
00000001000309eb T -[CALApplication applicationDidFinishLaunching:]
...
Now let’s see what happens in gdb:
$ gdb --silent /Applications/Calendar.app
Reading symbols for shared libraries ................................. done
Without the symbol file:
(gdb) b -[CALApplication applicationDidFinishLaunching:]
Function "-[CALApplication applicationDidFinishLaunching:]" not defined.
Make breakpoint pending on future shared library load? (y or [n]) n
And after loading the symbol file:
(gdb) add-symbol-file Calendar.stabs
add symbol table from file "Calendar.stabs"? (y or n) y
Reading symbols from /Users/0xced/Calendar.stabs...done.
(gdb) b -[CALApplication applicationDidFinishLaunching:]
Breakpoint 1 at 0x1000309f2
You will notice that the breakpoint address does not exactly match the symbol address (0x1000309f2 vs 0x1000309eb, a difference of 7 bytes); this is because gdb automatically recognizes the function prologue and sets the breakpoint just after it.
GDB script
You can use this GDB script to automate this, given that the stripped executable is the current target.
Add the script from below to your .gdbinit, target the stripped executable and run the command objc_symbols in gdb:
$ gdb test
...
(gdb) b +[TestClass randomNum]
Function "+[TestClass randomNum]" not defined.
(gdb) objc_symbols
(gdb) b +[TestClass randomNum]
Breakpoint 1 at 0x100000ee1
(gdb) ^D
define objc_symbols
    shell rm -f /tmp/gdb-objc_symbols
    set logging redirect on
    set logging file /tmp/gdb-objc_symbols
    set logging on
    info target
    set logging off
    shell target="$(head -1 /tmp/gdb-objc_symbols | head -1 | awk -F '"' '{ print $2 }')"; objc-symbols "$target" | SymTabCreator -o /tmp/gdb-symtab
    set logging on
    add-symbol-file /tmp/gdb-symtab
    set logging off
end
There is no direct way to do this (that I know of), but it seems like a great idea.
And now there is a way to do it... nice answer, 0xced!
The DWARF file format is well documented, IIRC, and, as the lldb source is available, you have a working example of a parser.
Since the source to class-dump is also available, it shouldn't be too hard to modify it to spew DWARF output that could then be loaded into the debugger.
Obviously, you wouldn't be able to dump symbols with full fidelity, but this would probably be quite useful.
You can use DSYMCreator.
With DSYMCreator, you can create a symbol file from an iOS executable binary.
It's a toolchain, so you can use it like this.
$ ./main.py --only-objc /path/to/binary/xxx
Then a file /path/to/binary/xxx.symbol will be created, which is a DWARF-format symbol file. You can import it into lldb yourself.
Apart from that, DSYMCreator also supports exporting symbols from IDA Pro; you can use it like this:
$ ./main.py /path/to/binary/xxx
Yes, just omit the --only-objc flag. IDA Pro will then run automatically, and a file /path/to/binary/xxx.symbol will be created, which is the symbol file.
Thanks to 0xced for creating objc-symbols, which is part of the DSYMCreator toolchain.
BTW, https://github.com/tobefuturer/restore-symbol is another choice.

How can I inspect a Hadoop SequenceFile for which I lack full schema information?

I have a compressed Hadoop SequenceFile from a customer which I'd like to inspect. I do not have full schema information at this time (which I'm working on separately).
But in the interim (and in the hopes of a generic solution), what are my options for inspecting the file?
I found a tool forqlift: http://www.exmachinatech.net/01/forqlift/
I have tried 'forqlift list' on the file. It complains that it can't load classes for the custom Writable subclasses included, so I will need to track down those implementations.
But is there any other option available in the meantime? I understand that most likely I can't extract the data, but is there some tool for scanning how many key/value pairs there are, and of what type?
From shell:
$ hdfs dfs -text /user/hive/warehouse/table_seq/000000_0
or directly from hive (which is much faster for small files, because it is running in an already started JVM)
hive> dfs -text /user/hive/warehouse/table_seq/000000_0
works for sequence files.
Check the SequenceFileReadDemo class in the 'Hadoop: The Definitive Guide' sample code. Sequence files have the key/value types embedded in them. Use SequenceFile.Reader.getKeyClass() and SequenceFile.Reader.getValueClass() to get the type information.
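As a rough sketch of that approach (not taken from the book; the class name and the command-line path argument are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class SeqFileTypes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Open the sequence file passed as the first argument.
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, new Path(args[0]), conf);
        try {
            // The key and value classes are recorded in the file header.
            System.out.println("key class:   " + reader.getKeyClass().getName());
            System.out.println("value class: " + reader.getValueClass().getName());
        } finally {
            reader.close();
        }
    }
}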
My first thought would be to use the Java API for sequence files to try to read them. Even if you don't know which Writable is used by the file, you can guess and check the error messages (there may be a better way that I don't know).
For example:
private void readSeqFile(Path pathToFile) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, pathToFile, conf);
    Text key = new Text(); // this could be the wrong type
    Text val = new Text(); // also could be wrong
    while (reader.next(key, val)) {
        System.out.println(key + ":" + val);
    }
}
This program would crash if those are the wrong types, but the Exception should say which Writable type the key and value actually are.
Edit:
Actually, if you do less file.seq you can usually read some of the header and see what the Writable types are (at least for the first key/value). On one file, for example, I see:
SEQ^F^Yorg.apache.hadoop.io.Text"org.apache.hadoop.io.BytesWritable
I'm not a Java or Hadoop programmer, so my way of solving the problem may not be the best one, but anyway:
I spent two days solving the problem of reading a SequenceFile locally (Linux Debian amd64) without installing Hadoop.
The provided sample
while (reader.next(key, val)) {
    System.out.println(key + ":" + val);
}
works well for Text, but didn't work for compressed BytesWritable input data.
What did I do? I downloaded this utility for creating (writing) Hadoop SequenceFiles:
https://github.com/shsdev/sequencefile-utility/archive/master.zip
and got it working, then modified it for reading input Hadoop SequenceFiles.
The instructions for running this utility from scratch on Debian:
sudo apt-get install maven2
sudo mvn install
sudo apt-get install openjdk-7-jdk
edit "sudo vi /usr/bin/mvn",
change `which java` to `which /usr/lib/jvm/java-7-openjdk-amd64/bin/java`
Also I've added (probably not required)
'
PATH="/home/mine/perl5/bin${PATH+:}${PATH};/usr/lib/jvm/java-7-openjdk-amd64/"; export PATH;
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
export JAVA_VERSION=1.7
'
to ~/.bashrc
Then usage:
sudo mvn install
~/hadoop_tools/sequencefile-utility/sequencefile-utility-master$ /usr/lib/jvm/java-7-openjdk-amd64/bin/java -jar ./target/sequencefile-utility-1.0-jar-with-dependencies.jar
-- and this doesn't break the default java 1.6 installation that is required for FireFox/etc.
For resolving an error with SequenceFile compatibility (e.g. "Unable to load native-hadoop library for your platform... using builtin-java classes where applicable"), I used the libs from the Hadoop master server as is (a kind of hack):
scp root@10.15.150.223:/usr/lib/libhadoop.so.1.0.0 ~/
sudo cp ~/libhadoop.so.1.0.0 /usr/lib/
scp root@10.15.150.223:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/libjvm.so ~/
sudo cp ~/libjvm.so /usr/lib/
sudo ln -s /usr/lib/libhadoop.so.1.0.0 /usr/lib/libhadoop.so.1
sudo ln -s /usr/lib/libhadoop.so.1.0.0 /usr/lib/libhadoop.so
One night of drinking coffee, and I had written this code for reading SequenceFile Hadoop input files (using this command to run it: "/usr/lib/jvm/java-7-openjdk-amd64/bin/java -jar ./target/sequencefile-utility-1.3-jar-with-dependencies.jar -d test/ -c NONE"):
import org.apache.hadoop.io.*;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.ValueBytes;
import java.io.DataOutputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;

Path file = new Path("/home/mine/mycompany/task13/data/2015-08-30");
reader = new SequenceFile.Reader(fs, file, conf);
long pos = reader.getPosition();
logger.info("GO from pos " + pos);

DataOutputBuffer rawKey = new DataOutputBuffer();
ValueBytes rawValue = reader.createValueBytes();

int DEFAULT_BUFFER_SIZE = 1024 * 1024;
DataOutputBuffer kobuf = new DataOutputBuffer(DEFAULT_BUFFER_SIZE);
kobuf.reset();

int rl;
do {
    rl = reader.nextRaw(kobuf, rawValue);
    logger.info("read len for current record: " + rl + " and in more details ");
    if (rl >= 0) {
        logger.info("read key " + new String(kobuf.getData()) + " (keylen " + kobuf.getLength() + ") and data " + rawValue.getSize());
        FileOutputStream fos = new FileOutputStream("/home/mine/outb");
        DataOutputStream dos = new DataOutputStream(fos);
        rawValue.writeUncompressedBytes(dos);
        kobuf.reset();
    }
} while (rl > 0);
I've just added this chunk of code to file src/main/java/eu/scape_project/tb/lsdr/seqfileutility/SequenceFileWriter.java just after the line
writer = SequenceFile.createWriter(fs, conf, path, keyClass,
valueClass, CompressionType.get(pc.getCompressionType()));
Thanks to these sources of info:
If using hadoop-core instead of mahout, you will have to download asm-3.1.jar manually:
search.maven.org/remotecontent?filepath=org/ow2/util/asm/asm/3.1/asm-3.1.jar
search.maven.org/#search|ga|1|asm-3.1
The list of available mahout repos:
repo1.maven.org/maven2/org/apache/mahout/
Intro to Mahout:
mahout.apache.org/
Good resource for learning the interfaces and sources of Hadoop Java classes (I used it for writing my own code for reading SequenceFiles):
http://grepcode.com/file/repo1.maven.org/maven2/com.ning/metrics.action/0.2.7/org/apache/hadoop/io/BytesWritable.java
Sources of the project tb-lsdr-seqfilecreator that I used for creating my own SequenceFile reader:
www.javased.com/?source_dir=scape/tb-lsdr-seqfilecreator/src/main/java/eu/scape_project/tb/lsdr/seqfileutility/ProcessParameters.java
stackoverflow.com/questions/5096128/sequence-files-in-hadoop - the same example (reading key/value, which didn't work)
https://github.com/twitter/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/RawSequenceFileRecordReader.java - this one helped me (I used reader.nextRaw, the same as in nextKeyValue() and other subs)
Also, I changed ./pom.xml to use native apache.hadoop instead of mahout.hadoop, but this is probably not required, because the bugs for reader.next(key, value) are the same for both, so I had to use reader.nextRaw(keyRaw, valueRaw) instead:
diff ../../sequencefile-utility/sequencefile-utility-master/pom.xml ./pom.xml
9c9
< <version>1.0</version>
---
> <version>1.3</version>
63c63
< <version>2.0.1</version>
---
> <version>2.4</version>
85c85
< <groupId>org.apache.mahout.hadoop</groupId>
---
> <groupId>org.apache.hadoop</groupId>
87c87
< <version>0.20.1</version>
---
> <version>1.1.2</version>
93c93
< <version>1.1</version>
---
> <version>1.1.3</version>
I was just playing with Dumbo. When you run a Dumbo job on a Hadoop cluster, the output is a sequence file. I used the following to dump out an entire Dumbo-generated sequence file as plain text:
$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar \
-input totals/part-00000 \
-output unseq \
-inputformat SequenceFileAsTextInputFormat
$ bin/hadoop fs -cat unseq/part-00000
I got the idea from here.
Incidentally, Dumbo can also output plain text.
Following the answer of Praveen Sripati, here is a small example of SequenceFileReadDemo.java from 'Hadoop: The Definitive Guide' by Tom White.
The data are in HDFS, at this location: user/hduser/output-hashsort/, and the file is
part-r-00001
In Eclipse, in the Arguments tab I've written this string:
and this is part of the output, with the debugger