Testing net/http?

I am a little confused about how to structure a Go web app and its tests. I have read How to Write Go Code but still don't get it. For example, I have a Go project called "beacon" with a beacon.go file at the root. Adding a trivial beacon_test.go file (copied verbatim from http://golang.org/pkg/net/http/httptest/#example_Server) causes this error:
$ go test
# github.com/jelder/beacon
./beacon_test.go:11: main redeclared in this block
previous declaration at ./beacon.go:216
FAIL github.com/jelder/beacon [build failed]
Sure enough, line 11 is func main(). If I instead change the package main line in my beacon_test.go to package hello, I get this error instead:
can't load package: package github.com/jelder/beacon: found packages main (beacon.go) and hello (beacon_test.go) in /Users/jacob/src/github.com/jelder/beacon

beacon_test.go also has a function called main(); rename it to TestFirst (or any other name you like, as long as it starts with Test; note that the uppercase T is important). There is no need to change the package name. Just run go test . from inside the package you are working on (the one containing the *.go files). Post the full files if you need more help.
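For illustration, here is a minimal sketch of what beacon_test.go could look like after the rename (adapted from the httptest example; the handler body and the expected response are placeholders, not taken from the beacon project):

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "testing"
)

// TestFirst wraps the httptest example in a test function so that
// go test can discover it; note there is no second func main() here.
func TestFirst(t *testing.T) {
    // Spin up a throwaway HTTP server with a trivial handler.
    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "Hello, client")
    }))
    defer ts.Close()

    // Hit the test server and read the response body.
    res, err := http.Get(ts.URL)
    if err != nil {
        t.Fatal(err)
    }
    body, err := ioutil.ReadAll(res.Body)
    res.Body.Close()
    if err != nil {
        t.Fatal(err)
    }

    if string(body) != "Hello, client\n" {
        t.Errorf("unexpected body: %q", body)
    }
}

The file stays in package main, the same package as beacon.go, which is why the duplicate main() was the only conflict.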

Related

Print check_cxx_source_runs() detailed output

I have a find-package module that utilizes check_cxx_source_runs() to test whether the package was loaded properly. However, it fails and I am not sure what causes the failure.
Is there any way I can print the actual error (as one would see on the terminal) from check_cxx_source_runs() rather than just the variable that reports success/failure?
I found the error output in "CMakeError.log" under the build directory.
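For reference, a minimal sketch of how the check is usually wired up (the result variable name PKG_SOURCE_RUNS is made up for this example):

include(CheckCXXSourceRuns)

# Try to compile and run a tiny program; the result (1 on success, empty on failure)
# lands in PKG_SOURCE_RUNS.
check_cxx_source_runs("int main() { return 0; }" PKG_SOURCE_RUNS)

# When the check fails, the compiler and runtime output is appended to
# CMakeFiles/CMakeError.log under the build directory, which is where the
# details mentioned above can be found.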

Go test specific file output

I'm new to Go and trying to work on a project.
The structure of the code is like
handlers/
-time.go
-time_test.go
I've already tried go build and make (project name).
The thing is, I want to add some console output inside the code so that I know if a branch is covered or not (or for debugging). Right now, it doesn't work for me.
If I use:
go test -run test_file_path
It will just output
Pass 0.009s
even if I put t.Log("print log") or even fmt.Print("Say Hello") inside the test function.
If I just use go test -v test_file_path, I get undefined variable errors; the build and test fail.
Any suggestions? Thanks!
Just run go test ./handlers -v. Here handlers is the package path.
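As a sketch (the function name TestTime and the messages are assumed, based on the time_test.go file mentioned above), a test like this will show its log output when run with -v:

package handlers

import (
    "fmt"
    "testing"
)

func TestTime(t *testing.T) {
    t.Log("print log")     // recorded by the test; printed with go test -v (or if the test fails)
    fmt.Print("Say Hello") // written to the test binary's standard output
}

Running go test ./handlers -v then prints the === RUN / --- PASS lines for each test along with the t.Log output, instead of the bare pass summary.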

How to add a user defined function in QDB Library?

QDB is a database provided by QNX Neutrino package. I went through the QDB documentation to add a user defined SQL function: http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.qdb_en_dev_guide/writing_functions.html?cp=2_0_8
I created a source file which had my user-defined SQL function written in C and the qdb_function structure definition. I built it with a makefile to create libudf.so.
As suggested by the QDB docs, I added Function = udftag#libudf.so in qdb.cfg. But when running qdb from the shell prompt, it gives the following error:
qdb -I basic -V -R set -v -c /etc/sql/qdb.cfg -s de_DE#cldr -o tempstore=/fs/tmpfs
QDB: No script registered for handling corrupt database.
qdb: processing [TempMainAddressBook]Function - Can't access shared library
and qdb exits immediately.
I have tried the following things:
made sure the sqlite3 library is added in the makefile
the source code is strictly C; I used the extern "C" directive to avoid name mangling since the file extension is .cpp, and I also tried a .c extension
gave the absolute path of libudf.so in qdb.cfg: Function = udftag#/usr/lib/libudf.so
the qdb_function struct is properly defined in the library's source code
tried without the static declaration of the function (mentioned in the QDB docs)
After all these attempts I still get the same error every time: Can't access shared library.
If anyone has an idea how to resolve this error, please share.
Suggestion 1: run qdb by setting LD_DEBUG=1, like in:
LD_DEBUG=1 qdb command line options
This will output a lot of debug information from the dynamic loader as it attempts to locate and then load the .so files. Check what path it outputs before the "Can't access" message is displayed.
Suggestion 2: obvious but make sure that the permissions are OK for the .so file. Do you have the execution permission set?
Suggestion 3: check if the error message is identical if you completely remove the .so file from the system
Suggestion 4: increase the number of lower-case 'v's. QDB likely supports more, with progressively more verbose information provided as you increase the count (6 should be enough for full verbosity).

literal string expected error

Please have a look at the following code
with text_io;
use text_io;
procedure hello is
begin
put_line("hello");
new_line(3);
end hello;
When I click "Build All" in the GPS IDE, I get this error:
gnatmake -d -PC:\Users\yohan\firstprogram.gpr
firstprogram.gpr:1:06: literal string expected
firstprogram.gpr:2:01: "end" expected
gnatmake: "C:\Users\yohan\firstprogram.gpr" processing failed
[2013-04-03 13:29:58] process exited with status 4 (elapsed time: 00.47s)
I am very new to Ada, as you can see, this is my first program. Please help.
On the command line, gnatmake will happily compile a file which contains Ada code but has the extension .gpr. GPS knows "better" than that, and insists on treating firstprogram.gpr as a GNAT project file, which of course it isn't.
You'll find life with GNAT much easier if you stick with its file naming conventions: .ads for a spec, .adb for a body, and the file name needs to be the unit name in lower case. In your case, the file should have been called hello.adb.
The simplest approach to creating a GNAT project file in GPS is to go to the Project menu and select New. The only places where you must enter data are on the "Naming the project" page (you might choose firstproject!) and the "Main files" page, where you'd click on the blue + to add hello.adb; you can Forward through the others.
After adding the main file, you can click Apply to install the new project file; now you can Build all and Run.
You may find the GPS tutorial helpful (Help menu, GPS ...)
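For reference, the project file that the wizard generates ends up looking roughly like this minimal sketch (the project name is just an example):

-- firstproject.gpr: minimal GNAT project file naming the main program
project Firstproject is
   for Main use ("hello.adb");
end Firstproject;

With that in place, gnatmake -P firstproject.gpr (which is what Build All invokes) finds and builds hello.adb on its own.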

How to force STORE (overwrite) to HDFS in Pig?

When developing Pig scripts that use the STORE command, I have to delete the output directory before every run or the script stops and reports:
2012-06-19 19:22:49,680 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 6000: Output Location Validation Failed for: 'hdfs://[server]/user/[user]/foo/bar More info to follow:
Output directory hdfs://[server]/user/[user]/foo/bar already exists
So I'm searching for an in-Pig solution to automatically remove the directory, also one that doesn't choke if the directory is non-existent at call time.
In the Pig Latin Reference I found the shell command invoker fs. Unfortunately the Pig script breaks whenever anything produces an error. So I can't use
fs -rmr foo/bar
(i. e. remove recursively) since it breaks if the directory doesn't exist. For a moment I thought I may use
fs -test -e foo/bar
which is a test and shouldn't break, or so I thought. However, Pig again interprets test's return code on a non-existing directory as a failure code and breaks.
There is a JIRA ticket for the Pig project addressing my problem and suggesting an optional parameter OVERWRITE or FORCE_WRITE for the STORE command. Anyway, I'm using Pig 0.8.1 out of necessity and there is no such parameter.
At last I found a solution on grokbase. Since finding the solution took too long I will reproduce it here and add to it.
Suppose you want to store your output using the statement
STORE Relation INTO 'foo/bar';
Then, in order to delete the directory, you can call at the start of the script
rmf foo/bar
No ";" or quotations required since it is a shell command.
I cannot reproduce it now, but at some point I got an error message (something about missing files), and I can only assume that rmf interfered with map/reduce. So I recommend putting the call before any relation declaration. After SETs, REGISTERs and defaults should be fine.
Example:
SET mapred.fairscheduler.pool 'inhouse';
REGISTER /usr/lib/pig/contrib/piggybank/java/piggybank.jar;
%default name 'foobar'
rmf foo/bar
Rel = LOAD 'something.tsv';
STORE Rel INTO 'foo/bar';
Once you use the fs command, there are a lot of ways to do this. For an individual file, I wound up adding this to the beginning of my scripts:
-- Delete a file (won't work for output, which will be a directory,
-- but will work for a file that gets copied or moved during
-- the script.)
fs -touchz top_100
rm top_100
For a directory
-- Delete dir
fs -rm -r out