I am trying to build and simulate a project that references INET (the INET Wireless Tutorial, actually), but I get the error below: "Error: Cannot load library '../inet/src//libINET.dll': A dynamic link library (DLL) initialization routine failed". My project WireTryLiveson references the INET project. Do I need to set up my PATH environment variable differently? I tried the following:
Open "RSVPPacket.msg", modify it then change it back, and save it (which was suggested by a google group when I searched "libinet.dll")
Build INET
Neither seemed to work. INET Example projects simulate just fine.
What should I change so I can load libINET.dll?
You can see the libINET.dll here on the Project Explorer on the left.
Here are my environment variables.
Here's what the OMNeT++ console displays with my error:
Starting...
$ cd C:/inetModule/WireTryLiveson
$ WireTryLiveson.exe -m -n .;../inet/src;../inet/examples;../inet/tutorials;../inet/showcases --image-path=../inet/images -l ../inet/src/INET omnetpp.ini
OMNeT++ Discrete Event Simulation (C) 1992-2017 Andras Varga, OpenSim Ltd.
Version: 5.1.1, build: 170508-adbabd0, edition: Academic Public License -- NOT FOR COMMERCIAL USE
See the license for distribution terms and warranty disclaimer
<!> Error: Cannot load library '../inet/src//libINET.dll': A dynamic link library (DLL) initialization routine failed
End.
Simulation terminated with exit code: -1073741819
Working directory: C:/inetModule/WireTryLiveson
Command line: WireTryLiveson.exe -m -n .;../inet/src;../inet/examples;../inet/tutorials;../inet/showcases --image-path=../inet/images -l ../inet/src/INET omnetpp.ini
Environment variables:
PATH=;C:/inetModule/inet/src;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\mingw64\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin;;C:/omnetpp-5.1.1-src-windows/omnetpp-5.1.1/ide/jre/bin/server;C:/omnetpp-5.1.1-src-windows/omnetpp-5.1.1/ide/jre/bin;C:/omnetpp-5.1.1-src-windows/omnetpp-5.1.1/ide/jre/lib/amd64;.;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\mingw64\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\local\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin;C:\Windows\System32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin\site_perl;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin\vendor_perl;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin\core_perl;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1;
OMNETPP_ROOT=C:/omnetpp-5.1.1-src-windows/omnetpp-5.1.1/
OMNETPP_IMAGE_PATH=C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\images
Here's my omnetpp.ini:
[General]
network = WirelessTry
sim-time-limit = 25s
*.host*.networkLayer.arpType = "GlobalARP"
*.hostA.numUdpApps = 1
*.hostA.udpApp[0].typename = "UDPBasicApp"
*.hostA.udpApp[0].destAddresses = "hostB"
*.hostA.udpApp[0].destPort = 5000
*.hostA.udpApp[0].messageLength = 1000B
*.hostA.udpApp[0].sendInterval = exponential(12ms)
*.hostA.udpApp[0].packetName = "UDPData"
*.hostB.numUdpApps = 1
*.hostB.udpApp[0].typename = "UDPSink"
*.hostB.udpApp[0].localPort = 5000
*.host*.wlan[0].typename = "IdealWirelessNic"
*.host*.wlan[0].mac.useAck = false
*.host*.wlan[0].mac.fullDuplex = false
*.host*.wlan[0].radio.transmitter.communicationRange = 500m
*.host*.wlan[0].radio.receiver.ignoreInterference = true
*.host*.**.bitrate = 1Mbps
Here's my file wirelessTry.ned:
import inet.common.figures.DelegateSignalConfigurator;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.node.inet.INetworkNode;
import inet.physicallayer.contract.packetlevel.IRadioMedium;
import inet.visualizer.integrated.IntegratedCanvasVisualizer;
network WirelessTry
{
    parameters:
        string hostType = default("WirelessHost");
        string mediumType = default("IdealRadioMedium");
        @display("bgb=650,500;bgg=100,1,grey95");
        @figure[title](type=label; pos=0,-1; anchor=sw; color=darkblue);
        @figure[rcvdPkText](type=indicatorText; pos=420,20; anchor=w; font=,20; textFormat="packets received: %g"; initialValue=0);
        @statistic[rcvdPk](source=hostB_rcvdPk; record=figure(count); targetFigure=rcvdPkText);
        @signal[hostB_rcvdPk];
        @delegatesignal[rcvdPk](source=hostB.udpApp[0].rcvdPk; target=hostB_rcvdPk);
    submodules:
        visualizer: IntegratedCanvasVisualizer {
            @display("p=580,125");
        }
        configurator: IPv4NetworkConfigurator {
            @display("p=580,200");
        }
        radioMedium: <mediumType> like IRadioMedium {
            @display("p=580,275");
        }
        figureHelper: DelegateSignalConfigurator {
            @display("p=580,350");
        }
        hostA: <hostType> like INetworkNode {
            @display("p=50,325");
        }
        hostB: <hostType> like INetworkNode {
            @display("p=450,325");
        }
}
First of all, go to INET Properties | OMNeT++ | Makemake | select src | Options... | Compile tab | More >> and make sure that "Export include path for other projects" and "Force compiling object files for use in DLLs" are set. On the Target tab, set "Export this shared/static library for other projects". Then rebuild INET.
If it doesn't help, then try the following workaround:
In properties of your project (e.g. WireTryLiveson):
select inet in Project References
go to OMNeT++ | Makemake | select src | Options... | Target and select "Shared library (.dll, .so or .dylib)"
go to OMNeT++ | Makemake | select src | Options... | Compile and select "Add include paths exported from referenced projects"
In Run | Run Configurations... for your project, select opp_run and make sure that Working dir actually points to the directory that contains omnetpp.ini.
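If INET still refuses to load after rebuilding, it can also help to check which DLLs libINET.dll itself depends on, since a missing or mismatched dependency can also make the DLL fail to load. From the MinGW shell shipped with OMNeT++ (the path below matches the one in the error message):
$ objdump -p ../inet/src/libINET.dll | grep "DLL Name"
Every DLL listed there has to be reachable through PATH when WireTryLiveson.exe starts.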
I keep my user.lua specific to each project folder. Is there anything in place that lets me exclude ZeroBrane's own environment paths when I check a module's require statement with "Evaluate in Console"?
The reason for this is that I want to make sure everything is working within the plugin engine itself.
This is what gets checked for a missing module.
lualibs and bin are ZeroBrane-specific, if I see it right.
Output
local toast = require("toast")
[string " local toast = require("toast")"]:1: module 'toast' not found:
no field package.preload['toast']
no file 'lualibs/toast.lua'
no file 'lualibs/toast/toast.lua'
no file 'lualibs/toast/init.lua'
no file './toast.lua'
no file '/usr/local/share/luajit-2.0.4/toast.lua'
no file '/usr/local/share/lua/5.1/toast.lua'
no file '/usr/local/share/lua/5.1/toast/init.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Internals/toast.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Internals/toast/init.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Modules/toast.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Modules/toast/init.lua'
no file 'bin/clibs/libtoast.dylib'
no file 'bin/clibs/toast.dylib'
no file './toast.so'
no file '/usr/local/lib/lua/5.1/toast.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Internals/libtoast_64.so'
Here is my user.lua file at this time
--[[--
Use this file to specify **User** preferences.
Review [examples](+/Applications/ZeroBraneStudio.app/Contents/ZeroBraneStudio/cfg/user-sample.lua) or check [online documentation](http://studio.zerobrane.com/documentation.html) for details.
--]]--
--https://studio.zerobrane.com/doc-general-preferences#debugger
-- to automatically open files requested during debugging
editor.autoactivate = true
--enable verbose output
--debugger.verbose=true
--[[--
specify how print results should be redirected in the application being debugged (v0.39+). Use 'c' for ‘copying’ (appears in the application output and the Output panel), 'r' for ‘redirecting’ (only appears in the Output panel), or 'd' for ‘default’ (only appears in the application output). This is mostly useful for remote debugging to specify how the output should be redirected.
--]]--
debugger.redirect="c"
-- to force execution to continue immediately after starting debugging;
-- set to `false` to disable (the interpreter will stop on the first line or
-- when debugging starts); some interpreters may use `true` or `false`
-- by default, but can be still reconfigured with this setting.
debugger.runonstart = true
-- FlyWithLua.ini version 2.7.6 build 2018-10-24
-- Where to search for modules.
-- use this to evaluate your project folder: select the print call below, right mouse button --> Evaluate in Console
--print(ide.filetree.projdir)
ZBSProjDir = "/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua"
INTERNALS_DIRECTORY = ZBSProjDir .. "/Internals/"
MODULES_DIRECTORY = ZBSProjDir .. "/Modules/"
package.path = table.concat({
    package.path,
    INTERNALS_DIRECTORY .. "?.lua",
    INTERNALS_DIRECTORY .. "?/init.lua",
    MODULES_DIRECTORY .. "?.lua",
    MODULES_DIRECTORY .. "?/init.lua",
}, ";")
package.cpath = table.concat({
    package.cpath,
    INTERNALS_DIRECTORY .. "?.ext",
    MODULES_DIRECTORY .. "?.ext",
}, ";")
-- Produce a correct name pattern for binary modules for OS and architecture.
-- This resolves clash between OS X and Linux binary modules by requiring "lib"
-- prefix for Linux ones.
local library_pattern = "?_64."
if SYSTEM == "IBM" then
    library_pattern = library_pattern .. "dll"
elseif SYSTEM == "APL" then
    library_pattern = library_pattern .. "so"
else
    library_pattern = "lib" .. library_pattern .. "so"
end
package.cpath = package.cpath:gsub("?.ext", library_pattern)
Version --> ZeroBrane Studio (1.90; MobDebug 0.706)
Greetings Lars
You should get the desired effect if the toast module file is located in your project directory. When a command is executed in the Console, the current directory is set to the project directory, so even though the lualibs folder from the IDE may be in the path, it should make no difference (unless you copied the module into lualibs).
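One quick sanity check, using only what is already referenced above (ide.filetree.projdir is the same field quoted in the user.lua comment), is to evaluate the effective search paths directly in the Console:
-- run via "Evaluate in Console" to see what require() will search
print(ide.filetree.projdir)   -- current project directory (also the Console's working directory)
print(package.path)           -- Lua module search path
print(package.cpath)          -- binary (C) module search path
If the project's Internals/Modules directories show up there and lualibs still wins, the module was most likely also copied into lualibs.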
I had the following situation: I'm in a live user-mode debugging session and I wanted to show the win32k!_W32Process structure. Unfortunately, win32k is a kernel-mode SYS file, so the symbols are not available in the user-mode session.
I know that I can always load a DLL, EXE or SYS as a dump file and then inspect the symbols. Usually I would do that via File/Open Crash Dump.
This time, I wanted to show the participants of a debugging workshop that it's possible to debug multiple systems at the same time, so I opened the Win32K.sys via WinDbg's command prompt:
0:003> |
. 0 id: 10fc attach name: [...]\NetHeaps.exe
0:003> .opendump C:\Windows\winsxs\[...]\win32k.sys
Loading Dump File [C:\Windows\winsxs\[...]\win32k.sys]
Opened 'C:\Windows\winsxs\[...]\win32k.sys'
||0:0:003>
As we can now see, we have two systems, and I'm currently on the live debugging system:
||0:0:003> ||
. 0 Live user mode: <Local>
1 Image file: C:\Windows\winsxs\[...]\win32k.sys
I thought I could switch to the other system now, but that does not work:
||0:0:003> ||1s
^ Illegal debuggee error in '||1s'
I would not have worried too much, but it can't find the symbols of win32k in this case:
||0:0:003> .reload
Reloading current modules
...........................
||0:0:003> dt win32k!_W32Process
Symbol win32k!_W32Process not found.
The problem is not in the || command, it's in the .opendump command.
The help says:
After you use the .opendump command, you must use the g (Go) command to finish loading the dump file.
Be aware that this will also run your live process. Therefore, freeze the threads first (~*f) and unfreeze later (~*u).
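A minimal command sequence (system and thread numbers will differ) could look like this:
$$ freeze all threads of the live target first
~*f
$$ finish loading the dump opened with .opendump (this would also run the live process, which is why its threads are frozen)
g
$$ ... inspect the dump (see below); when done, switch back to the live system and unfreeze
||0s
~*u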
After that you can switch the system and display the type:
||1:1:004> ||
0 Live user mode: <Local>
. 1 Image file: C:\Windows\winsxs\[...]\win32k.sys
||1:1:004> dt _W32Process
win32k!_W32PROCESS
+0x000 Process : Ptr64 _EPROCESS
+0x008 RefCount : Uint4B
+0x00c W32PF_Flags : Uint4B
[...]
Executive summary: I want to use GDB to extract the coverage execution counts stored in memory in my embedded target, and use them to create .gcda files (for feeding to gcov/lcov).
The setup:
I can successfully cross-compile my binary, targeting my specific embedded target - and then execute it under QEMU.
I can also use QEMU's GDB support to debug the binary (i.e. use tar extended-remote localhost:... to attach to the running QEMU GDB server, and fully control the execution of my binary).
Coverage:
Now, to perform "on-target" coverage analysis, I cross-compile with
-fprofile-arcs -ftest-coverage. GCC then emits 64-bit counters to keep track of execution counts of specific code blocks.
Under normal (i.e. host-based, not cross-compiled) execution, when the app finishes, __gcov_exit is called - and gathers all these execution counts into .gcda files (which gcov then uses to report coverage details).
In my embedded target however, there's no filesystem to speak of - and libgcov basically contains empty stubs for all __gcov_... functions.
Workaround via QEMU/GDB: To address this, and do it in a GCC-version-agnostic way, I could list the coverage-related symbols in my binary via MYPLATFORM-readelf, and grep-out the relevant ones (e.g. __gcov0.Task1_EntryPoint, __gcov0.worker, etc):
$ MYPLATFORM-readelf -s binary | grep __gcov
...
46: 40021498 48 OBJECT LOCAL DEFAULT 4 __gcov0.Task1_EntryPoint
...
I could then use the offsets/sizes reported to automatically create a GDB script - a script that extracts the counters' data via simple memory dumps (from offset, dump length bytes to a local file).
What I don't know (and have failed to find any relevant info/tool for) is how to convert the resulting pairs of (memory offset, memory data) into .gcda files. If such a tool/script exists, I'd have a portable (platform-agnostic) way to do coverage on any QEMU-supported platform.
Is there such a tool/script?
Any suggestions/pointers would be most appreciated.
UPDATE: I solved this myself, as you can read below - and wrote a blog post about it.
Turned out there was a (much) better way to do what I wanted.
The Linux kernel includes portable GCOV-related functionality that abstracts away the GCC version-specific details by providing this endpoint:
size_t convert_to_gcda(char *buffer, struct gcov_info *info)
So basically, I was able to do on-target coverage via the following steps:
Step 1
I added three slightly modified versions of the Linux gcov files to my project: base.c, gcc_4_7.c and gcov.h. I had to replace some Linux-isms inside them - like vmalloc, kfree, etc. - to make the code portable (and thus compilable on my embedded platform, which has nothing to do with Linux).
Step 2
I then provided my own __gcov_init...
/* Assumed headers: stdio/stdlib for printf/malloc/exit, and the gcov.h
   from Step 1 for struct gcov_info and gcov_info_filename(). */
#include <stdio.h>
#include <stdlib.h>
#include "gcov.h"

typedef struct tagGcovInfo {
    struct gcov_info *info;
    struct tagGcovInfo *next;
} GcovInfo;

GcovInfo *headGcov = NULL;

void __gcov_init(struct gcov_info *info)
{
    printf(
        "__gcov_init called for %s!\n",
        gcov_info_filename(info));
    fflush(stdout);
    GcovInfo *newHead = malloc(sizeof(GcovInfo));
    if (!newHead) {
        puts("Out of memory!");
        exit(1);
    }
    newHead->info = info;
    newHead->next = headGcov;
    headGcov = newHead;
}
...and __gcov_exit:
void __gcov_exit()
{
    GcovInfo *tmp = headGcov;
    while (tmp) {
        char *buffer;
        int bytesNeeded = convert_to_gcda(NULL, tmp->info);
        buffer = malloc(bytesNeeded);
        if (!buffer) {
            puts("Out of memory!");
            exit(1);
        }
        convert_to_gcda(buffer, tmp->info);
        printf("Emitting %6d bytes for %s\n", bytesNeeded, gcov_info_filename(tmp->info));
        free(buffer);
        tmp = tmp->next;
    }
}
Step 3
Finally, I scripted my GDB (driving QEMU remotely) via this:
$ cat coverage.gdb
tar extended-remote :9976
file bin.debug/fputest
b base.c:88 <================= This breaks on the "Emitting" printf in __gcov_exit
commands 1
silent
set $filename = tmp->info->filename
set $dataBegin = buffer
set $dataEnd = buffer + bytesNeeded
eval "dump binary memory %s 0x%lx 0x%lx", $filename, $dataBegin, $dataEnd
c
end
c
quit
And finally, executed both QEMU and GDB - like this:
$ # In terminal 1:
qemu-system-MYPLATFORM ... -kernel bin.debug/fputest -gdb tcp::9976 -S
$ # In terminal 2:
MYPLATFORM-gdb -x coverage.gdb
...and that's it - I was able to generate the .gcda files in my local filesystem, and then see coverage results with gcov and lcov.
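For completeness, once the .gcda files are sitting next to the .gcno files produced at compile time, the host-side post-processing is the usual one; the output names below are just placeholders:
$ lcov --capture --directory . --output-file coverage.info
$ genhtml coverage.info --output-directory coverage-html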
UPDATE: I wrote a blog post showing the process in detail.
I have a compressed Hadoop SequenceFile from a customer which I'd like to inspect. I do not have full schema information at this time (which I'm working on separately).
But in the interim (and in the hopes of a generic solution), what are my options for inspecting the file?
I found a tool forqlift: http://www.exmachinatech.net/01/forqlift/
I have tried 'forqlift list' on the file. It complains that it can't load the classes for the custom Writable subclasses included, so I will need to track down those implementations.
But is there any other option available in the meantime? I understand that most likely I can't extract the data, but is there some tool for scanning how many key/value pairs there are, and of what type?
From shell:
$ hdfs dfs -text /user/hive/warehouse/table_seq/000000_0
or directly from hive (which is much faster for small files, because it is running in an already started JVM)
hive> dfs -text /user/hive/warehouse/table_seq/000000_0
works for sequence files.
Check the SequenceFileReadDemo class in the 'Hadoop: The Definitive Guide' sample code. Sequence files have the key/value types embedded in them. Use SequenceFile.Reader.getKeyClass() and SequenceFile.Reader.getValueClass() to get the type information.
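A minimal, self-contained sketch of that approach (the class name is made up, the file path comes from the command line, and it uses the older Hadoop 1.x style Reader constructor, like the examples further down):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class SeqFileTypes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // args[0] is the sequence file to inspect
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, new Path(args[0]), conf);
        System.out.println("key class:   " + reader.getKeyClass().getName());
        System.out.println("value class: " + reader.getValueClass().getName());
        reader.close();
    }
}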
My first thought would be to use the Java API for sequence files to try to read them. Even if you don't know which Writable is used by the file, you can guess and check the error messages (there may be a better way that I don't know).
For example:
private void readSeqFile(Path pathToFile) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, pathToFile, conf);
    Text key = new Text(); // this could be the wrong type
    Text val = new Text(); // also could be wrong
    while (reader.next(key, val)) {
        System.out.println(key + ":" + val);
    }
}
This program would crash if those are the wrong types, but the Exception should say which Writable type the key and value actually are.
Edit:
Actually, if you do less file.seq, you can usually read some of the header and see what the Writable types are (at least for the first key/value). On one file, for example, I see:
SEQ^F^Yorg.apache.hadoop.io.Text"org.apache.hadoop.io.BytesWritable
I'm not a Java or Hadoop programmer, so my way of solving the problem may not be the best one, but anyway.
I spent two days solving the problem of reading FileSeq locally (Linux Debian amd64) without installing Hadoop.
The provided sample
while (reader.next(key, val)) {
System.out.println(key + ":" + val);
}
works well for Text, but didn't work for BytesWritable compressed input data.
What did I do?
I downloaded this utility for creating (writing) Hadoop SequenceFile data:
github_com/shsdev/sequencefile-utility/archive/master.zip
got it working, and then modified it for reading Hadoop SeqFiles as input.
Instructions for running this utility from scratch on Debian:
sudo apt-get install maven2
sudo mvn install
sudo apt-get install openjdk-7-jdk
edit "sudo vi /usr/bin/mvn",
change `which java` to `which /usr/lib/jvm/java-7-openjdk-amd64/bin/java`
Also, I've added (probably not required)
'
PATH="/home/mine/perl5/bin${PATH+:}${PATH};/usr/lib/jvm/java-7-openjdk-amd64/"; export PATH;
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
export JAVA_VERSION=1.7
'
to ~/.bashrc
Then usage:
sudo mvn install
~/hadoop_tools/sequencefile-utility/sequencefile-utility-master$ /usr/lib/jvm/java-7-openjdk-amd64/bin/java -jar ./target/sequencefile-utility-1.0-jar-with-dependencies.jar
-- and this doesn't break the default java 1.6 installation that is required for FireFox/etc.
To resolve an error with FileSeq compatibility (e.g. "Unable to load native-hadoop library for your platform... using builtin-java classes where applicable"), I used the libs from the Hadoop master server as-is (a kind of hack):
scp root@10.15.150.223:/usr/lib/libhadoop.so.1.0.0 ~/
sudo cp ~/libhadoop.so.1.0.0 /usr/lib/
scp root@10.15.150.223:/usr/lib/jvm/java-6-sun-1.6.0.26/jre/lib/amd64/server/libjvm.so ~/
sudo cp ~/libjvm.so /usr/lib/
sudo ln -s /usr/lib/libhadoop.so.1.0.0 /usr/lib/libhadoop.so.1
sudo ln -s /usr/lib/libhadoop.so.1.0.0 /usr/lib/libhadoop.so
One night over coffee, I wrote this code for reading FileSeq Hadoop input files (using this command to run it: "/usr/lib/jvm/java-7-openjdk-amd64/bin/java -jar ./target/sequencefile-utility-1.3-jar-with-dependencies.jar -d test/ -c NONE"):
import org.apache.hadoop.io.*;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.ValueBytes;
import java.io.DataOutputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
Path file = new Path("/home/mine/mycompany/task13/data/2015-08-30");
reader = new SequenceFile.Reader(fs, file, conf);
long pos = reader.getPosition();
logger.info("GO from pos " + pos);
DataOutputBuffer rawKey = new DataOutputBuffer();
ValueBytes rawValue = reader.createValueBytes();
int DEFAULT_BUFFER_SIZE = 1024 * 1024;
DataOutputBuffer kobuf = new DataOutputBuffer(DEFAULT_BUFFER_SIZE);
kobuf.reset();
int rl;
do {
    rl = reader.nextRaw(kobuf, rawValue);
    logger.info("read len for current record: " + rl + " and in more details ");
    if (rl >= 0) {
        logger.info("read key " + new String(kobuf.getData()) + " (keylen " + kobuf.getLength() + ") and data " + rawValue.getSize());
        FileOutputStream fos = new FileOutputStream("/home/mine/outb");
        DataOutputStream dos = new DataOutputStream(fos);
        rawValue.writeUncompressedBytes(dos);
        kobuf.reset();
    }
} while (rl > 0);
I've just added this chunk of code to file src/main/java/eu/scape_project/tb/lsdr/seqfileutility/SequenceFileWriter.java just after the line
writer = SequenceFile.createWriter(fs, conf, path, keyClass,
valueClass, CompressionType.get(pc.getCompressionType()));
Thanks to these sources of info:
Links:
If using hadoop-core instead of mahout, then you will have to download asm-3.1.jar manually:
search_maven_org/remotecontent?filepath=org/ow2/util/asm/asm/3.1/asm-3.1.jar
search_maven_org/#search|ga|1|asm-3.1
The list of available Mahout repos:
repo1_maven_org/maven2/org/apache/mahout/
Intro to Mahout:
mahout_apache_org/
Good resource for learning interfaces and sources of Hadoop Java classes (I used it for writing my own code for reading FileSeq):
http://grepcode.com/file/repo1.maven.org/maven2/com.ning/metrics.action/0.2.7/org/apache/hadoop/io/BytesWritable.java
Sources of project tb-lsdr-seqfilecreator that I used for creating my own project FileSeq reader:
www_javased_com/?source_dir=scape/tb-lsdr-seqfilecreator/src/main/java/eu/scape_project/tb/lsdr/seqfileutility/ProcessParameters.java
stackoverflow_com/questions/5096128/sequence-files-in-hadoop - the same example (reading key/value, which doesn't work here)
https://github.com/twitter/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/RawSequenceFileRecordReader.java - this one helped me (I used reader.nextRaw the same as in nextKeyValue() and other subs)
Also, I changed ./pom.xml to use native apache.hadoop instead of mahout.hadoop, but this is probably not required, because the bug with reader.next(key, value) is the same for both, so I had to use reader.nextRaw(keyRaw, valueRaw) instead:
diff ../../sequencefile-utility/sequencefile-utility-master/pom.xml ./pom.xml
9c9
< <version>1.0</version>
---
> <version>1.3</version>
63c63
< <version>2.0.1</version>
---
> <version>2.4</version>
85c85
< <groupId>org.apache.mahout.hadoop</groupId>
---
> <groupId>org.apache.hadoop</groupId>
87c87
< <version>0.20.1</version>
---
> <version>1.1.2</version>
93c93
< <version>1.1</version>
---
> <version>1.1.3</version>
I was just playing with Dumbo. When you run a Dumbo job on a Hadoop cluster, the output is a sequence file. I used the following to dump out an entire Dumbo-generated sequence file as plain text:
$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar \
-input totals/part-00000 \
-output unseq \
-inputformat SequenceFileAsTextInputFormat
$ bin/hadoop fs -cat unseq/part-00000
I got the idea from here.
Incidentally, Dumbo can also output plain text.
Following the answer of Praveen Sripati, here is a small example of SequenceFileReadDemo.java from Hadoop: The Definitive Guide by Tom White.
The data is in HDFS, at this location: user/hduser/output-hashsort/ and the file is
part-r-00001
In Eclipse, in the Arguments tab I've written this string:
and this is part of the output, with the debugger
I just wanted to know how to configure FCKEditor to upload files and images to the server where the website is hosted.
The relevant part of its config file (I think) looks like this:
FCKConfig.LinkUpload = true ;
FCKConfig.LinkUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension ;
FCKConfig.LinkUploadAllowedExtensions = ".(7z|aiff|asf|avi|bmp|csv|doc|fla|flv|gif|gz|gzip|jpeg|jpg|mid|mov|mp3|mp4|mpc|mpeg|mpg|ods|odt|pdf|png|ppt|pxd|qt|ram|rar|rm|rmi|rmvb|rtf|sdc|sitd|swf|sxc|sxw|tar|tgz|tif|tiff|txt|vsd|wav|wma|wmv|xls|xml|zip)$" ; // empty for all
FCKConfig.LinkUploadDeniedExtensions = "" ; // empty for no one
FCKConfig.ImageUpload = true ;
FCKConfig.ImageUploadURL = FCKConfig.BasePath + 'filemanager/connectors/' + _QuickUploadLanguage + '/upload.' + _QuickUploadExtension + '?Type=Image' ;
FCKConfig.ImageUploadAllowedExtensions = ".(jpg|gif|jpeg|png|bmp)$" ; // empty for all
FCKConfig.ImageUploadDeniedExtensions = "" ; // empty for no one
Could it be a folder permission problem? Is this part of the config.js alright?
You don't state what language you are using. The file upload functionality in FCKeditor has ASP, .NET, ColdFusion and PHP uploaders, among others. It would help if you said what server (IIS/Linux?) and server-side language you are using.
With limited information it's a long shot, but there are settings in fckconfig.js for configuring your file browser (around line 276). Make sure you have the right language selected:
var _FileBrowserLanguage = 'php' ; // asp | aspx | cfm | lasso | perl | php | py
var _QuickUploadLanguage = 'php' ; // asp | aspx | cfm | lasso | perl | php | py
You'll also have to set write permissions on the folder structure you are uploading to (which might be the cause of the "invalid request" error); the process for editing file permissions differs depending on whether you are using Windows or Linux.
It's not well documented, but it's also possible to debug the file manager settings by going to the following URLs in a browser:
/fckeditor/editor/filemanager/connectors/test.html
and
/fckeditor/editor/filemanager/connectors/uploadtest.html
The upload test scripts are very useful and can help diagnose many problems - you can see errors more easily, for a start. Give them a try and you should have a better idea of what the problem is.
It's solved, thanks anyway. I just had to add the "Files" type to a variable in the config.aspx file. It only had "Images", so that's why I couldn't upload files.