I have a simple program written in Golang. It's an API. Inside the project folder, there's a folder named cmd containing my main package (used to initialise the app and define the endpoints for the API). There's also a folder named after my program, containing multiple files from a package also named after my program. This package serves as the model, doing all the necessary queries, and contains all the types I have defined.
I also created a folder called test. It contains all my test files under a package named test. The problem is that to run the tests, I have to access my main package! Is there a way to do that in Golang? I tried simply using import "cmd/main", but of course it doesn't work.
I also had an idea. Perhaps I could move all my initialising functions (in the cmd folder) to the package named after my program. This way I could do a regular import in test, and create, inside cmd, a main.go in the main package that serves as the entry point for the compiler.
I'm new to Go so I'm not really confident. Do you think this is the right way?
Thanks.
EDIT: Apparently some people think this question is a duplicate, but it's not. Here's the explanation I gave in one of the comments:
I read this post before posting, but it didn't answer my question
because in that post the person has his tests in the main package. The
reason why I asked my question is because I don't want to have my
tests in the main package. I'd rather have them all in a test folder
inside the same package.
What you want to do is not possible in Go (assuming you want to test private functions).
because I don't want to have my tests in the main package. I'd rather
have them all in a test folder inside the same package.
Your code belongs to a different package if you move it into a different folder.
This is how Go defines packages (https://golang.org/doc/code.html#Organization):
Each package consists of one or more Go source files in a single
directory.
This is how your code structured:
main
| -- main.go (package main)
+ -- test
| -- main_test.go (package test)
It is idiomatic to keep tests in the same folder as the code. It is normal for a language or framework to set some rules that developers have to follow, and Go is pretty strict about that.
This is how you can organize your code:
main
| -- main.go (package main)
| -- main_test.go (package main_test)
| -- main_private_test.go (package main)
Often it makes sense to test code against its public interfaces. The best way to do that is to put the tests into a different package. The Go convention is to keep tests in the same folder, which leads to using the same package name. There is a workaround for that issue: you can add a _test suffix to the package name in your test files (package main_test).
If it is not possible to test your code through its public interfaces, you can add another file with tests and use package main in that file.
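As a concrete sketch of the white-box variant (the function names are invented for illustration), a test file alongside main.go can stay in package main and see unexported identifiers:

main.go:

package main

// start is unexported, so it is only visible inside package main.
func start() string { return "started" }

func main() { println(start()) }

main_private_test.go:

package main

import "testing"

// Same package as main.go, so the unexported start() is visible here.
func TestStart(t *testing.T) {
    if got := start(); got != "started" {
        t.Fatalf("start() = %q, want %q", got, "started")
    }
}

A main_test.go declared as package main_test would sit in the same directory but could only exercise exported identifiers, which is what makes it a black-box test.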
I've got the main project structure in one folder, and the unit tests enclosed in another folder (two different Meson instances). In the unit tests I need to include one file from the main project (the element to be tested). I don't want to specify a relative path, as I want it to be portable between other programmers.
How can I instruct Meson to first go back from the current folder and then look through the application files for the file I'm looking for? I want to make it that way so any change in the code can be tested right away without any copying or modifications.
C:\Users\User1\Project\application
C:\Users\User1\Project\unittests
I need to be able to see files from application while currently being in unittests.
Declare a project dependency at the top-level meson.build, like
project_dep = declare_dependency(include_directories: inc_dir, sources: srcs, dependencies:[...])
Make sure that your main is not in the sources. In the test-level meson.build, include project_dep like this:
unit_tests_exec = executable('UnitTests', gtest_srcs,
dependencies: [gtest_dep, gmock_dep, project_dep])
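For context, here is a minimal sketch of how the two build files might fit together; the directory names and the source file under test are assumptions based on the question's layout:

Top-level meson.build:

project('Project', 'cpp')

# Headers and sources from the application folder; main.cpp is deliberately excluded.
inc_dir = include_directories('application')
srcs = files('application/element.cpp')

project_dep = declare_dependency(include_directories: inc_dir, sources: srcs)

subdir('unittests')

unittests/meson.build:

gtest_dep = dependency('gtest', main: true)
gmock_dep = dependency('gmock')

unit_tests_exec = executable('UnitTests', files('element_test.cpp'),
    dependencies: [gtest_dep, gmock_dep, project_dep])

test('unit tests', unit_tests_exec)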
You can check how I organized a project using Meson for a TDD session here:
https://github.com/elvisoric/tdd_session
I am building a project on GitHub written in Objective-C. It resolves MAC addresses down to manufacturer details. The lookup table is currently stored as a text file, manuf.txt (from the Wireshark project), which is parsed at run-time; this is costly. I would prefer to compile it down to archived objects at build-time and load those instead.
I would like to amend the build phases such that I:
Build a simple compiler
Run the compiler, parsing manuf.txt and outputting archived objects
Build the framework
Copy the archived objects into the framework
I am looking for wisdom on how to achieve steps 1 and 2 using Xcode v7.3, as Xcode provides only a Copy Files phase or a Run Script phase. Examples of other projects achieving similar goals would be inspiring.
I suspect that what you are asking is possible, but tricky. The reason is that you will need to write a bunch of class files and then dynamically add them to the project.
Firstly you will need to employ a run script phase to run various tools from the command line to parse your file and generate a number of class files from it. I would suggest looking into various templating engines. For example appledoc uses moustache templates to generate API documentation files. You could use the same technique to generate header and implementation files.
Next, rather than generating archived objects and trying to import them into a framework, I think you may be better off generating raw source code, adding it to the project, and compiling that into the framework. Probably simpler in the long run.
To automatically include the generated code I would look into (which means I haven't actually tried this :-) adding a folder reference to the project rather than an Xcode group. Folder references are an option in the 'Add files to ...' dialog.
Folder references refer to a directory and automatically add the entire contents of that directory to a project. So you can use one to point to the directory where you have generated the source code. This is a much better option than trying to manipulate the project or injecting things into an established framework.
I would prefer to parse the file at runtime. After launch you can look for an already existing output; otherwise, parse the file one time.
However, I had to do something similar at Objective-Cloud. I simply added a Run Script build phase and put the compiler call into it.
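For illustration, the body of such a Run Script phase might look roughly like this; the helper tool's name and the paths are hypothetical:

# Run Script build phase (shell).
# "manufcompiler" stands in for whatever command-line tool does the parsing;
# it is assumed to be built by an earlier target in the same scheme.
"${BUILT_PRODUCTS_DIR}/manufcompiler" \
    "${SRCROOT}/Resources/manuf.txt" \
    "${BUILT_PRODUCTS_DIR}/manuf.archive"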
I am trying to implement a custom IP core for the Zedboard. In my User_Logic I am including a component (My_Module) from the VHDL module (My_Module.vhd) which I wrote as part of the ISE project. But when I come to generate the bitstream for my design in PlanAhead, it asks for My_Module.ngc, as if it is treating it as a black box. I thought the NGC was only required when using CoreGen IP cores, but it seems it also wants one for any VHDL module included, as I guess this is a 'black box'.
The issue is how to create an NGC file from the VHDL for this module, which is part of an ISE project, as I can't find any function in ISE that allows you to generate the netlist for just one VHDL module. Or can I export this module into a separate ISE project and then synthesise it to get the .ngc?
Many thanks
Sam
Are you sure you've typed the module name in exactly the same way both in your module .vhd file, and in the file using the module as a component?
Under normal circumstances, if your project includes the module as a .vhd file, it'll just be synthesized along with the rest of your sources - I did a quick test and renamed a component in one of my own projects, and got a complaint about a possibly missing .ngc file (this was in ISE, and not in PlanAhead though).
So the answer is to generate the NGC files by making each module you want 'the top module'; you can then run synthesis to generate the individual NGC. Then proceed as normal when adding IP to a PCore: add these NGC files to the netlist folder, modify the BBD file, and all that!
As a note for completeness: to get the module working you need to go into the synthesis settings, under 'Xilinx Specific', and disable 'Add I/O Buffers'.
Are you including My_Module.vhd as a source file in your ISE Project? If you are, check to see that the ISE project doesn't have a yellow question mark next to the My_Module component. If it does, then it needs more information about that component. You should see a little icon with the letters VHD in it in your ISE Implementation Hierarchy View.
Not having come from a C/compiled languages background, I'm finding it hard to get to grips with using Go's packages mechanism to create modular code.
In Python, to import a module and get access to its functions and whatnot, it's a simple case of
import foo
where foo.py is the name of the module you want to import in the same directory. Otherwise you can add an empty __init__.py into a subfolder and access the modules via
from subfolder import foo
You can then access functions by simply referencing them through the module name, e.g. y = foo.bar(y). This makes it easy to separate logical pieces of code from one another.
In Go however, you specify the package name in the source file itself, e.g.
package foo
at the top of the 'foo' module, which you can then supposedly import through
import (
"foo"
)
and then refer to it through that, i.e. y := foo.Bar(x). But what I can't wrap my head around is how this works in practice. The relevant docs on golang.org seem terse, and directed at people with more (any) experience using makefiles and compilers.
Can someone please clearly explain how you are meant to modularise your code in Go, the right project structure to do so, and how the compilation process works?
Wiki answer, please feel free to add/edit.
Modularization
Multiple files in the same package
This is just what it sounds like. A bunch of files in the same directory that all start with the same package <name> directive means that they are treated as one big set of code by Go. You can transparently call functions in a.go from b.go. This is mostly for the benefit of code organization.
A fictional example would be a "blog" package might be laid out with blog.go (the main file), entry.go, and server.go. It's up to you. While you could write a blog package in one big file, that tends to affect readability.
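As a minimal sketch of that fictional blog package (the function names are invented for illustration):

blog.go:

package blog

// Title is defined in blog.go...
func Title() string { return "My Blog" }

entry.go:

package blog

// ...and can be called from entry.go without any import,
// because both files belong to package blog.
func RenderEntry(body string) string {
    return Title() + ": " + body
}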
Multiple packages
The standard library is done this way. Basically you create packages and optionally install them into $GOROOT. Any program you write can then import "<name>" and call <name>.someFunction().
In practice, any standalone or shared components should be compiled into packages. Back to the blog package above: if you wanted to add a news feed, you could refactor server.go into its own package. Then blog.go and news.go would both import "server". A rough sketch follows below.
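Here is what that refactor might look like; the server API below is invented for illustration:

news.go:

package news

import "server" // pre-modules import path, matching the style of this answer

// Serve registers the news feed with the shared server package.
func Serve() {
    server.Handle("/news", "latest headlines") // hypothetical function in the server package
}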
Compilation
I currently use gomake with Makefiles. The Go installation comes with some great include files for make that simplify the creation of a package or a command. It's not hard and the best way to get up to speed with these is to just look at sample makefiles from open source projects and read "How to Write Go Code".
In addition to the package organisation above, and like pip in Python, you can use dep (https://github.com/golang/dep) for Go package management. If you use it on an existing Go package, it will automatically build the dependency tree, with versions, for all the packages being used. When shifting to a production server, dep ensure will use Gopkg.toml to install all the required packages.
Just use dep ensure -add; the other dep commands are:
Commands:
init Set up a new Go project, or migrate an existing one
status Report the status of the project's dependencies
ensure Ensure a dependency is safely vendored in the project
version Show the dep version information
check Check if imports, Gopkg.toml, and Gopkg.lock are in sync
Examples:
dep init set up a new project
dep ensure install the project's dependencies
dep ensure -update update the locked versions of all dependencies
dep ensure -add github.com/pkg/errors add a dependency to the project
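After a dep ensure -add, the dependency is recorded in Gopkg.toml with a constraint along these lines (the version shown is illustrative):

[[constraint]]
  name = "github.com/pkg/errors"
  version = "0.8.0"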
I'm starting to create MSBuild scripts for my products, and I've encountered a dilemma.
The code is divided into around 25 projects; some will require obfuscation, some will require strong-name signing, and others will require linking into a single file.
All these projects should result in 3 products, with 3 setups.
The question at hand is as follow: How do I divide the MSBuild scripts to make most sense?
Do I create a script for each product? Do I create a script for each project? Do I have one script for building, another for obfuscation, and so on?
I think it is a good idea to have a script per product.
To minimize duplication, create reusable "sub-scripts" and import them into the main script (this can be done with the Import directive).
<Import Project="..\Steps\Step1.proj" />
Script per product sounds like the way to go. You may want to consider having any number of shared or base scripts too, to import common build steps. Like Mike Chaliy already mentioned you can then use Import in your product's build script:
<Import Project="..\Shared\Base.proj" />
Another thing you might want to take advantage of is target and property overriding. It's akin to overriding virtual methods in a .NET class. See the documentation and the MSBuild Team Blog for more details. I know I've taken advantage of this quite often by setting defaults in the included scripts and then overriding them as necessary in the product build script in order to customize build behaviour. For instance, I often have generated files that are required before the build, so I hook those targets into the BuildDependsOn property. This way my generated files are generated whenever I do an F5 from the IDE, call the build target from the command line, or otherwise build the project or solution. Obviously, if you have any build steps that run long or only need to be run in special circumstances (like building installers), you'll want to take care about exactly what gets hooked in.
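For instance, hooking a code-generation target into the build might look roughly like this (the target name is invented):

<!-- In the product's build script, after the common targets have been imported. -->
<PropertyGroup>
  <BuildDependsOn>
    GenerateFiles;
    $(BuildDependsOn)
  </BuildDependsOn>
</PropertyGroup>

<!-- Runs before the standard Build target because it is first in BuildDependsOn. -->
<Target Name="GenerateFiles">
  <Message Text="Generating files before build..." />
</Target>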