Why is running Guard outside of bundler dangerous? - ruby-on-rails-3

When I run bundle exec guard everything is kosher, but if I try to run guard I get this:
WARNING: You are using Guard outside of Bundler, this is dangerous and could not work. Using `bundle exec guard` is safer.
Why is this?

From Bundler's official site:
Run an executable that comes with a gem in your bundle
$ bundle exec rspec spec/models
In some cases, running executables without bundle exec may work, if
the executable happens to be installed in your system and does not
pull in any gems that conflict with your bundle.
However, this is unreliable and is the source of considerable pain.
Even if it looks like it works, it may not work in the future or on
another machine. If you want a way to get a shortcut to gems in your bundle:
$ bundle install --binstubs
$ bin/rspec spec/models
The executables installed into bin are scoped to the bundle and will
always work
I'm not sure whether there is anything specific to guard here, but in general it's good practice to run all of your gems' executables via bundle exec. Maybe they just decided to warn developers that running guard without it might cause trouble (e.g. if the version of guard installed on your system differs from the one in your Gemfile).
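As a hedged illustration (the pinned versions below are hypothetical), here is how a version mismatch can bite and how bundle exec or binstubs avoid it:
# Gemfile pins guard, e.g.:
#   gem 'guard', '~> 1.0'
#   gem 'guard-rspec'
$ guard --version              # resolves to the newest guard gem on the system, which may not match Gemfile.lock
$ bundle exec guard --version  # resolves guard (and its plugins) through Gemfile.lock
# Alternatively, generate bundle-scoped binstubs once and call those:
$ bundle install --binstubs
$ bin/guard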

How to undo what was executed by an npx package command?

I installed bit on my Ubuntu 22.04 machine using npx @teambit/bvm install, which created an executable in my HOME/bin folder and an entry in my .zshrc.
Now, I would like to know whether anything else was installed, and how I can completely remove Bit from my machine.
Ideally, I would like to know which code was run when doing npx @teambit/bvm install.
I use Volta (https://volta.sh/) to install Node.js.
Answering this question requires some context.
First and foremost, @teambit/bvm produces side effects in the ~/.bvm/ directory (see the code here: https://github.com/teambit/bvm). To delete Bit and BVM completely, you need to remove that directory manually.
In general, npx doesn't have a way to revert side effects of packages/commands you run through it (if they produce any side effects at all).
There's no way to get npx to undo what any tool did, as npx doesn't enforce any boundaries on tools.
At the end of the day, you need to delete each tool according to its own instructions.
The only thing npx does is create ~/bin/bvm (in the case of @teambit/bvm; other tools will use different names). This is a shortcut for the command that the package manager puts in place; it's unrelated to bvm or bit themselves. npx may also place things in the global node_modules or do other npm-related things.
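A hedged cleanup sketch based on the points above; the paths are the defaults @teambit/bvm uses, so verify them on your machine before deleting anything:
$ ls ~/.bvm            # BVM's own state and the Bit versions it installed
$ rm -rf ~/.bvm        # removes Bit and BVM completely
$ rm -f ~/bin/bvm      # the launcher created in HOME/bin
# Finally, delete the line that the install appended to ~/.zshrc (typically a PATH export pointing at ~/bin):
$ grep -n "bvm" ~/.zshrc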

ownCloud Desktop Client Theming

I have been trying to build the desktop client for about a week now so that I can dig in, try some theming, and turn off some features I won't be needing. In the process I have run into numerous issues and have managed to resolve them in one way or another. I have a VM running openSUSE, and I have downloaded the source archive ownCloudClient-2.3.2.tar.xz and extracted it into my /home/jwarren/client folder. I then ran:
cd admin/win/docker
docker build . -t owncloud-client-win32:
That one I was able to get through. Now I am on the second command:
docker run -v "$PWD:/home/user/client" owncloud-client-win32: \
/home/user/client/admin/win/docker/build.sh client/ $(id -u)
Here I get almost to the end and then receive this error message, which I can't figure out how to resolve:
CPack Error: Problem running NSIS command "user/bin/makensis"
CPack Error: Problem Compressing Directory
Can anyone help me out with this? Or maybe point me in the direction of better instructions for ownCloud Desktop Client theming? I noticed that once you get it installed properly, there are no instructions explaining where anything is to edit.
I wrote a comprehensive guide to building the Windows client using the cross-compilation toolchain in the Dockerfile some time ago on ownCloud Central: https://central.owncloud.org/t/error-using-docker-to-build-the-windows-client/5107/5
What you're probably missing is the git submodule initialization, i.e. running git submodule update --init in your unzipped repository. You need the submodules to bundle into the installer some pre-compiled binaries used by the shell integrations.
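As a hedged sketch of that step (it assumes the source tree at /home/jwarren/client is a git checkout; a plain tarball has no submodule metadata, so you would need to clone the matching tag instead):
$ cd /home/jwarren/client
$ git submodule update --init   # fetches the pre-compiled shell-integration binaries
# then re-run the docker run ... build.sh command from the question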
Also, as a side note, there were some problems last month with the mingw toolchain compiler (gcc7). In case you get a DLL error after installing the client with your self-generated installer, refer to https://central.owncloud.org/t/building-the-windows-installer/8403/4 for a snapshot of a fully working Docker image to use instead of your self-built one.
About the docs for building your own theme: those can be found at https://doc.owncloud.org/branded_clients/branded_desktop_client/index.html (for enterprise installations of ownCloud). For an unsupported version, you can also check the source in
https://github.com/owncloud/client/blob/master/src/libsync/theme.cpp for hints about which settings can be overridden from there.

How to debug Webpack-dev-server (in memory) with WebStorm?

According to WebStorm, we are required to debug against a dist directory, as specified in:
https://blog.jetbrains.com/webstorm/2015/09/debugging-webpack-applications-in-webstorm/
however, as per webpack's recommended development workflow, we should be running webpack-dev-server, so it's all in memory, as in:
webpack-dev-server --inline --progress --colors --display-error-details --display-cached --hot --port=3000
so there is no dist directory, which contradicts the examples posted at https://blog.jetbrains.com/webstorm/2015/09/debugging-webpack-applications-in-webstorm/
Is there a way to have webpack-dev-server write to a dist dir so WebStorm can be mapped to it and we can use source maps for live debugging?
FYI this is the project I am using to test:
https://github.com/ocombe/ng2-webpack
tx
Sean
Currently WebStorm needs the generated bundle + source map from webpack in order to analyze it and find the actual breakpoint.
So, in short, you can't debug webpack applications with just the webpack dev server. However, you can run a normal webpack build with file watching in parallel to it.
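For example, a hedged sketch (the flags are standard webpack CLI options, but adjust the output path and options to your own config):
$ webpack --watch --devtool source-map --output-path dist &   # writes dist/ plus source maps WebStorm can map to
$ webpack-dev-server --inline --hot --port=3000               # keeps serving the in-memory build as before
WebStorm then debugs against the files the watching build writes to dist/, while the dev server still provides live reloading.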
As you know, you will have to create a distribution/production bundle with source maps and then use that for debugging in WebStorm. Personally, I run tests with Karma while I have webpack-dev-server running. Karma tests can be run with the debugger, and that usually satisfies my debugging needs, while webpack-dev-server provides my "manual test" to see how I'm doing.
I guess what I'm saying for your case is: you can have the dev server running and, at the same time, have some kind of automated build with source maps that you can run the debugger on. This can be resource-intensive, though, so it depends on your memory and processing power.
I ended up using live-server (https://github.com/tapio/live-server) and followed this tutorial, which worked: https://blog.jetbrains.com/webstorm/2015/09/debugging-webpack-applications-in-webstorm/ (you just can't use webpack's built-in server, but that's OK).
I would add that you can put the statement
debugger;
in your JavaScript/TypeScript files, even in framework files of Angular or Vue 2 such as *.vue files.
So even if your path-to-URL mappings don't work, the debugger will stop there anyway.

On UnsatisfiedLinkError, clarification needed

When building the project from the command line using mvn clean install, everything builds without any issues.
When running some tests that use precompiled C libraries from IntelliJ, the tests fail with java.lang.UnsatisfiedLinkError.
I may be completely off here, but does IntelliJ not see the .so file? If so, how can it be added?
A shared library fails to load with UnsatisfiedLinkError if:
1. it's not in the working directory configured in the test run configuration;
2. it's not on the PATH environment variable (on Mac, Terminal and GUI apps have different environments, see this answer); run IDEA from the Terminal with open -a /Applications/IntelliJ\ IDEA\ 12.app/ to make the environments the same;
3. it's not in the location specified using the -Djava.library.path VM option;
4. the .so depends on some other library that is not found for any of reasons 1-3 (or a dependency of that dependency is not found, and so on).
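A hedged sketch of the usual fixes; the library name, directory, and Maven argLine pass-through below are hypothetical, so substitute the directory that actually contains your precompiled .so:
$ ls target/native/libfoo.so                                   # hypothetical location of the C library
# In the IntelliJ test run configuration, add the VM option:
#   -Djava.library.path=/path/to/project/target/native
# or pass it through Surefire so it matches what mvn clean install does:
$ mvn test -DargLine="-Djava.library.path=target/native"
# or export the path before launching the IDE so its test JVMs inherit it:
$ LD_LIBRARY_PATH=/path/to/project/target/native idea.sh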

Why do I get 'divide by zero' errors when I try to run my script with Rakudo?

I just built Rakudo and Parrot so that I could play with it and get started on learning Perl 6. I downloaded the Perl 6 book and happily typed in the first demo program (the tennis tournament example).
When I try to run the program, I get an error:
Divide by zero
current instr.: '' pc -1 ((unknown file):-1)
I have my perl6 binary in the build directory. I added a scripts directory under the rakudo build directory:
rakudo
|- perl6
\- scripts
|- perlbook_02.01
\- scores
If I try to run even a simple hello world script from my scripts directory I get the same error:
#!/home/daotoad/rakudo/perl6
use v6;
say "Hello nurse!";
However, if I run it from the rakudo directory, it works.
It sounds like there are some environment variables I need to set, but I am at a loss as to what they are and what values to give them.
Any thoughts?
Update:
I'd rather not install Rakudo at this point; I'd rather just run things from the build directory. This will allow me to keep the changes to my system minimal as I try out different Perl 6 builds (Rakudo * is out very soon).
The README file encouraged me to think that this was possible:
$ cd rakudo
$ perl Configure.pl --gen-parrot
$ make
This will create a "perl6" or "perl6.exe" executable in the
current (rakudo) directory. Programs can then be run from
the build directory using a command like:
$ ./perl6 hello.pl
Upon rereading, I found a reference to the fact that it is necessary to install rakudo before running scripts outside the build directory:
Once built, Rakudo's make install target will install Rakudo
and its libraries into the Parrot installation that was used to
create it. Until this step is performed, the "perl6" executable
created by make above can only be reliably run from the root of
Rakudo's build directory. After make install is performed,
the installed executable can be run from any directory (as long as
the Parrot installation that was used to create it remains intact).
So it looks like I need to install rakudo to play with Perl 6.
The next question is: where will Rakudo be installed? The README says into the Parrot installation used to build it.
I used the --gen-parrot option in my build, which looks like it installs Parrot into rakudo/parrot_install. So Rakudo will be installed into my rakudo/parrot_install?
Reading the Makefile supports this conclusion. I ran make install, and it did install into parrot_install.
This part of the build/install process is unclear for a newbie to Perl 6. I'll see if I can come up with a documentation patch to clarify things.
Off the top of my head:
Emphasize running make install before running scripts outside of the build directory. This requirement is currently buried in the middle of a paragraph and can easily be missed by someone skimming the docs (me).
Explicitly state that building with --gen-parrot will install perl6 into the parrot_install directory.
Did you run make install in Rakudo?
It's necessary in order to use Rakudo outside its build directory (and that's why both the README and http://rakudo.org/how-to-get-rakudo tell you to do it).
Don't worry, the default install location is local (in parrot_install/bin/perl inside your rakudo directory).
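A hedged recap of the whole flow (it assumes a checkout at ~/rakudo and that --gen-parrot put Parrot in ./parrot_install, as described in the question):
$ cd ~/rakudo
$ perl Configure.pl --gen-parrot   # builds and installs Parrot into ./parrot_install
$ make
$ make install                     # installs Rakudo into the same ./parrot_install
$ cd ~/anywhere/else
$ ~/rakudo/parrot_install/bin/perl6 hello.pl   # now runs outside the build directory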
In response to your update I've now updated the README:
http://github.com/rakudo/rakudo/commit/261eb2ae08fee75a0a0e3935ef64c516e8bc2b98
I hope you find it clearer than before. If you still see room for improvement, please consider submitting a patch to rakudobug@perl.org.