I've noticed that the npm search command does not guarantee complete results. Here is an example:
$npm search jasmine
does not list the jasmine-diff, jasmine-diff-reporter packages, while
$npm search jasmine diff
does.
I've read the docs, and there is no mention of any incompleteness; in fact, they state:
npm search performs a … search through package metadata for all files in the registry
I think this implies that search should be consistent and complete. As one can see, the jasmine-diff-reporter package does have the term jasmine in its keywords.
And the absence of the word jasmine from the description section isn't the cause either, since other packages like jasmine-diff do have that word and are still missing from the $npm search jasmine output.
So could anyone explain this behavior and/or suggest a workaround (other than using Google or something like that)?
The problem is the new "fast endpoint search" for npm search, implemented in https://github.com/npm/npm/commit/e3229324d507fda10ea9e94fd4de8a4ae5025c75. I have now filed a bug: https://github.com/npm/cli/issues/1211.
I investigated the npm scripts and found that the old search used the URL https://myNpmServer.com/repository/myNpmRegistry/-/all to get the package information, while the new search uses https://myNpmServer.com/repository/myNpmRegistry/-/v1/search?text=%2F.*%2F&size=20. The value "20" is hardcoded, but you can change it with the --searchlimit=N parameter of npm search, which is the simplest workaround.
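For example, to get far more than the default 20 results:
$ npm search --searchlimit=1000 jasmine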
The only problem is that you never know how big the search results are. There is no value that means "infinity" (I tried passing -1 and it did not work). If you really need the full search you can either give up on npm search and parse the JSON output of https://myNpmServer.com/repository/myNpmRegistry/-/all directly, or you can hack the file <NodeInstallationDir>/lib/node_modules/npm/lib/search.js and add your own parameter --oldsearch:
if (npm.config.get('oldsearch')) {
  // Forced old (complete) search:
  allPackageSearch(searchOpts).on('data', function (pkg) {
    entriesStream.write(pkg)
  }).on('error', function (e) {
    entriesStream.emit('error', e)
  }).on('end', function () {
    entriesStream.end()
  })
} else {
  // Original behavior: fast endpoint, falling back to the old search on error.
  esearch(searchOpts).on('data', function (pkg) {
    entriesStream.write(pkg)
    !esearchWritten && (esearchWritten = true)
  }).on('error', function (e) {
    if (esearchWritten) {
      // If esearch errored after already starting output, we can't fall back.
      return entriesStream.emit('error', e)
    }
    log.warn('search', 'fast search endpoint errored. Using old search.')
    allPackageSearch(searchOpts).on('data', function (pkg) {
      entriesStream.write(pkg)
    }).on('error', function (e) {
      entriesStream.emit('error', e)
    }).on('end', function () {
      entriesStream.end()
    })
  }).on('end', function () {
    entriesStream.end()
  })
}
After that you can run "npm search --oldsearch --registry ... '/regexp/'" and it will display truly all packages.
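If you would rather not patch npm at all, here is a minimal sketch of the first approach, fetching and parsing the /-/all document directly. It assumes your registry still serves that endpoint and that the response is one big JSON object whose keys (apart from the _updated timestamp) are package names:

const https = require('https')

https.get('https://myNpmServer.com/repository/myNpmRegistry/-/all', (res) => {
  let body = ''
  res.on('data', (chunk) => { body += chunk })
  res.on('end', () => {
    const all = JSON.parse(body)
    // Every key except the "_updated" timestamp should be a package name.
    const names = Object.keys(all).filter((k) => k !== '_updated')
    console.log(names.filter((n) => /jasmine/.test(n)))
  })
}).on('error', console.error)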
ADDITIONAL NOTE (nice to know):
Please be aware that while manipulating the .js scripts inside the Node installation (adding your own printouts, etc.) you may run into the error message
npm ERR! invalid value written to input stream
After that something gets broken and npm search stops working at all, or displays very little output. To repair this, just keep adding further printouts until it fails again with the abovementioned message. Then on the next run (only once) you will see these messages:
npm WARN all-package-metadata cached-entry-stream Empty or invalid stream
npm WARN Failed to read search cache. Rebuilding
npm WARN Building the local index for the first time, please be patient
and it returns to a proper state again. I did not investigate further why this happens and did not find a way to force invalidation of this search cache.
I hope my investigations can be helpful to somebody.
TL;DR
In Puppet Enterprise, how do I run a manifest (testpp.pp) from a task or plan (not Bolt)?
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  $apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp': }
  }
  $apply_results.each | $result | {
    notice($result.report)
  }
}
apply_prep seems to succeed, but apply is failing with the following error:
{
  "msg" : "Evaluation Error: Unknown function: 'report'. (file: /opt/puppetlabs/server/data/orchestration-services/code/environments/development/modules/base_windows/plans/testplan.pp, line: 16, column: 19)",
  "kind" : "bolt/plan-failure",
  "details" : {
    "class" : "Bolt::PAL::PALError"
  }
}
If I change the code to:
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    # Is this how to call a class? I cannot find an example.
    class { 'base_windows::testpp': }
  }
  $apply_results.each |$result| {
    $target = $result.target.name
    if $result.ok {
      out::message("${target} returned a value: ${result.value}")
    } else {
      out::message("${target} errored with a message: ${result.error.message}")
    }
  }
}
The plan tells me it has failed, but there are no errors in the node's report. In fact, there is no entry for the time the plan was executed.
I cannot find any examples on how to call a class from a plan, so the above apply() is a guess, based on this documentation.
I have installed the puppetlabs_reboot module and successfully run a plan using it; therefore, I conclude my system is set up correctly and it's just my code that is wrong.
Background
I may be going about this all wrong, so here is some background to the problem. Currently, I have a series of manifests that install various packages from the public Chocolatey repository depending on a node's classification. Package definitions are stored in Hiera data and each package's version is set to latest. At the end of the Package{} resource, some manifests include a reboot.
These manifests are used to provision new nodes and keep existing nodes up-to-date with the latest package version.
The Puppet agent is set to run once per hour and if the source package is updated in the Chocolatey repo, on the next Puppet run, the manifest will update the package, rebooting the node, if required.
Goal
New nodes are provisioned with the latest package version.
Prevent package updates at undetermined times on existing nodes.
Continue to allow Puppet agent runs every hour.
Make use of existing manifests.
Ideas
Split out the package{} code from the profile manifests and place it in tasks / plans, allowing packages to be updated out-of-hours.
Specify the actual package version in Hiera. Although this is more declarative and idempotent, it means keeping an eye on over 100 package versions. I guess it would be fairly simple to interrogate the Chocolatey repos with code to pull the latest version number (see the sketch after this list), but even so I am no better off.
Create a task with a script that runs choco upgrade all; however, the next Puppet run would revert package versions according to the version defined in Hiera, meaning Hiera still needs to be kept up-to-date.
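As a rough illustration of the second idea, here is a Node.js sketch that asks the Chocolatey community repository for a package's latest version. Chocolatey exposes a NuGet v2 OData feed; the exact URL layout and the crude XML extraction below are assumptions to verify against your repository, not a tested recipe:

const https = require('https')

function latestVersion (packageId, cb) {
  // Assumed NuGet v2 OData query against the Chocolatey community feed.
  const url = "https://community.chocolatey.org/api/v2/Packages()" +
    "?$filter=Id eq '" + packageId + "' and IsLatestVersion"
  https.get(encodeURI(url), (res) => {
    let body = ''
    res.on('data', (chunk) => { body += chunk })
    res.on('end', () => {
      // Quick hack: grep <d:Version> out of the Atom XML rather than
      // using a proper OData/XML client.
      const m = body.match(/<d:Version>([^<]+)<\/d:Version>/)
      cb(m ? m[1] : null)
    })
  }).on('error', () => cb(null))
}

latestVersion('git', (v) => console.log('latest git:', v))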
Problems
As per the main crux of this question, how do I run manifests (classes) from plans? If I understand correctly, tasks are for ad-hoc scripts, whereas plans can run tasks and manifests. As a lot of time has been invested in writing manifests, I would prefer not to rewrite them all as scripts.
I am confused by the Puppet documentation, as it seems to switch between PE and Bolt syntax. I am using Puppet Enterprise, where Puppet says they don't recommend using Bolt, but their examples seem to cite Bolt commands.
There are no errors in the node's report. apply_prep() reports that it executed successfully (albeit taking far longer than the puppetlabs_reboot module), but apply() results in a failure and nothing is logged in the node's reports.
Using the puppetlabs_reboot module as a reference, it appears their plan uses a bunch of tasks and that they don't use apply() to run their reboot{} class. Is this not duplicating the work?
If anyone has any suggestions or ideas, I'd be grateful if you could share.
I've got it to work. The class I was trying to run required parameters that I hadn't provided!
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp':
      filename => $filename,
      contents => $contents,
    }
  }
  # Output the whole result_set in the PE console
  return $apply_results
}
I found this out using the logs.
Turn on debug-level logging in /etc/puppetlabs/puppetserver/logback.xml (set <root level="debug">)
Tail the following logs:
tail -f /var/log/puppetlabs/bolt-server/bolt-server.log
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log | grep -B 5 -A 5 'testplan'
tail -f /var/log/puppetlabs/orchestration-services/orchestration-services.log
We are looking at requiring the Kubernetes extension as a dependency for one of our extensions, so that we are guaranteed the kubectl CLI is installed ahead of time. However, activating our extension in a test doesn't seem to be enough to bring the dependency into play (though it seems to work at runtime).
If I have a test like this:
let extension = vscode.extensions.getExtension(extensionId);
if (extension !== null && extension !== undefined) {
extension.activate().then(() => {
assert.ok(true);
done();
});
}
And it returns this:
rejected promise not handled within 1 second: Error: Unknown dependency 'ms-kubernetes-tools.vscode-kubernetes-tools'
stack trace: Error: Unknown dependency 'ms-kubernetes-tools.vscode-kubernetes-tools'
at p._handleActivateRequest (/home/travis/build/camel-tooling/vscode-camelk/.vscode-test/vscode-1.37.1/VSCode-linux-x64/resources/app/out/vs/workbench/services/extensions/node/extensionHostProcess.js:496:149)
at p._activateExtensions (/home/travis/build/camel-tooling/vscode-camelk/.vscode-test/vscode-1.37.1/VSCode-linux-x64/resources/app/out/vs/workbench/services/extensions/node/extensionHostProcess.js:496:607)
at p.activateByEvent (/home/travis/build/camel-tooling/vscode-camelk/.vscode-test/vscode-1.37.1/VSCode-linux-x64/resources/app/out/vs/workbench/services/extensions/node/extensionHostProcess.js:494:635)
at I._activateByEvent (/home/travis/build/camel-tooling/vscode-camelk/.vscode-test/vscode-1.37.1/VSCode-linux-x64/resources/app/out/vs/workbench/services/extensions/node/extensionHostProcess.js:760:680)
at I._handleEagerExtensions (/home/travis/build/camel-tooling/vscode-camelk/.vscode-test/vscode-1.37.1/VSCode-linux-x64/resources/app/out/vs/workbench/services/extensions/node/extensionHostProcess.js:764:710)
at define._startExtensionHost._readyToStartExtensionHost.wait.then.then (/home/travis/build/camel-tooling/vscode-camelk/.vscode-test/vscode-1.37.1/VSCode-linux-x64/resources/app/out/vs/workbench/services/extensions/node/extensionHostProcess.js:768:386)
Is there a way to test that the dependent extension is pulled in correctly? Can we trigger an installation of the dependent extension prior to testing activate? Can we mock this somehow?
Thanks.
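One possible workaround, as a sketch rather than a verified recipe: pre-install the dependency into the downloaded test instance of VS Code before launching the tests, using the CLI's --install-extension flag. The helper names below assume the vscode-test (1.x) package; verify them against the version you use:

const cp = require('child_process')
const path = require('path')
const {
  downloadAndUnzipVSCode,
  resolveCliPathFromVSCodeExecutablePath,
  runTests
} = require('vscode-test')

async function main () {
  const vscodeExecutablePath = await downloadAndUnzipVSCode('1.37.1')
  // Assumed helper from vscode-test 1.x that locates the bundled CLI.
  const cliPath = resolveCliPathFromVSCodeExecutablePath(vscodeExecutablePath)

  // Install the dependent extension so that activate() can resolve it.
  cp.spawnSync(cliPath,
    ['--install-extension', 'ms-kubernetes-tools.vscode-kubernetes-tools'],
    { encoding: 'utf-8', stdio: 'inherit' })

  await runTests({
    vscodeExecutablePath,
    extensionDevelopmentPath: path.resolve(__dirname, '..'),
    extensionTestsPath: path.resolve(__dirname, './suite/index')
  })
}

main().catch((err) => { console.error(err); process.exit(1) })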
This question is npm-specific.
A few years ago I wrote a tool named qnp that downloads the entire list of npm packages and then executes local queries very fast, around 0.2 seconds per query. This allows one to perform a very interesting study of the modern programming world: filtering by author names, descriptions, tags, etc., doing hundreds of queries, inspecting results, analyzing, having ideas, doing more queries. The official client is good, but it does not allow very fast queries at the speed of thought. Here is my question:
About a year ago the location of NPM's registry metadata DB was abandoned; it now returns an empty file. How can I download/fetch the entire list of metadata now? I need at least these fields: title/author/description/keywords/date. Optionally also download count, dependencies list and version.
Here is the code that was working previously:
// Note: for this to run, the write stream `S` and the close callback `f`
// had to be defined beforehand, something like:
var http = require('http')
var fs = require('fs')
var S = fs.createWriteStream('all.json')
var f = function () {}

var request = http.get({
  host: 'registry.npmjs.org',
  path: '/-/all/static/all.json',
  headers: {
    'Accept-Encoding': 'gzip, deflate'
  }
}, function (a, b, c) {
  var done = 0; var all = parseInt(a.headers['content-length'])
  a.on('data', function (a, b, c) {
    done += a.length
    process.stdout.write('\r' + (done / (all / 100)).toFixed(2) + '% ')
  })
  console.log('download started')
  a.pipe(S)
  S.on('finish', function (a, b, c) {
    console.log('download complete')
    S.close(f)
  })
})
Since this post came up near the top when I searched for the answer, let me point out two packages that might be helpful for people landing on this old question:
https://github.com/nice-registry/all-the-package-names
https://github.com/bconnorwhite/all-package-names
Using the first one, I downloaded a list with 2,247,694 entries by running
pnpx all-the-package-names > ~/temp/all-the-package-names.txt
where pnpx is pnpm's equivalent of npx, the npm CLI runner (the latter installed with Node.js).
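If you want the list programmatically rather than as a text file, here is a minimal sketch (assuming the package exports the name list as a JSON array, as its README describes):

const names = require('all-the-package-names')

const hits = names.filter((name) => name.includes('jasmine'))
console.log(hits.length + ' package names contain "jasmine"')

Note that this only covers package names; for the other metadata fields (description, keywords, date, etc.) you would still have to query the registry per package.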
I have been playing with Gulp.js and npm recently, and it's great. However, I don't really get the idea of npm as a package manager for packages which will get pushed to dist.
Let's go with an example.
I want to download the latest jquery, bootstrap and font-awesome so I can include them in my project. I could simply download them from their websites and include the files. Another option is a package manager, i.e. npm.
However, my node_modules directory is huge due to other packages such as gulp, and it's not nested at all. What would be the easiest way to move selected packages to another dir, for example src/vendors/?
I was trying to achieve that with a gulp task that simply copies specified files from node_modules to a specified dir. Nonetheless, in the long run it's almost the same as manually copying files, since I have to specify not only the input directory but also the output directory for each single package.
My current solution:
var vendors = {
  src: {
    jquery: 'node_modules/jquery/dist/**/*',
    bootstrap: 'node_modules/bootstrap/dist/**/*'
  },
  dist: {
    jquery: 'src/resources/vendors/jquery',
    bootstrap: 'src/resources/vendors/bootstrap'
  }
};

gulp.task('vendors', function() {
  var jquery = gulp.src(vendors.src.jquery)
    .pipe(gulp.dest(vendors.dist.jquery));
  var bootstrap = gulp.src(vendors.src.bootstrap)
    .pipe(gulp.dest(vendors.dist.bootstrap));
  return merge(jquery, bootstrap);
});
Is there an option to do it faster and/or better?
There's no need to explicitly specify the source and destination directory for each vendor library.
Remember, gulp is just JavaScript. That means you can use loops, arrays and whatever else JavaScript has to offer.
In your case you can simply maintain a list of vendor folder names, iterate over that list and construct a stream for each folder. Then use merge-stream to merge the streams:
var gulp = require('gulp');
var merge = require('merge-stream');

var vendors = ['jquery/dist', 'bootstrap/dist'];

gulp.task('vendors', function() {
  return merge(vendors.map(function(vendor) {
    return gulp.src('node_modules/' + vendor + '/**/*')
      .pipe(gulp.dest('src/resources/vendors/' + vendor.replace(/\/.*/, '')));
  }));
});
The only tricky part in the above is correctly figuring out the destination directory. We want everything in node_modules/jquery/dist to end up in src/resources/vendors/jquery and not in src/resources/vendors/jquery/dist, so we have to strip away everything after the first / using a regex.
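For example, the replace maps each entry like this:

'jquery/dist'.replace(/\/.*/, '')     // -> 'jquery'
'bootstrap/dist'.replace(/\/.*/, '')  // -> 'bootstrap'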
Now when you install a new library, you can just add it to the vendors array and run the task again.
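So, to pick up the font-awesome example from the question, you would add something like 'font-awesome/css' to the array (the folder inside the package is an assumption; check what the library actually ships):

var vendors = ['jquery/dist', 'bootstrap/dist', 'font-awesome/css'];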
The new version of AIR gives us the ability to globally capture runtime errors and handle them. The problem is that it doesn't provide the stack trace or any helpful information about the error other than the error id, message and name. For example, it may tell me that a null pointer exception has happened, but it will not tell me where or in which method. The debug version of the runtime gives us all of that, but when the app is deployed to customers it is not running on the debug version, so none of the useful information is available. I was wondering if this group has any suggestions on how to enable better logging of errors in an AIR app for better supportability of the product. Any suggestions?
I have a little hack to get line numbers too. :)
Make a listener to catch uncaught errors. I do it in my main class:
private function addedToStageHandler(event:Event):void {
loaderInfo.uncaughtErrorEvents.addEventListener( UncaughtErrorEvent.UNCAUGHT_ERROR, uncaughtErrorHandler );
}
For example, my listener with error.getStackTrace():
private function uncaughtErrorHandler( event:UncaughtErrorEvent ):void
{
    var errorText:String;
    var stack:String;
    if( event.error is Error )
    {
        errorText = (event.error as Error).message;
        stack = (event.error as Error).getStackTrace();
        if( stack != null ){
            errorText += stack;
        }
    } else if( event.error is ErrorEvent )
    {
        errorText = (event.error as ErrorEvent).text;
    } else
    {
        errorText = event.text;
    }
    event.preventDefault();
    Alert.show( errorText + " " + event.error, "Error" );
}
Add an additional compiler argument: -compiler.verbose-stacktraces=true
Create the release build.
Now the little hack:
Mac:
Go to the installation location where you have your .app file. Right-click it and choose "Show Package Contents". Navigate to Contents ▸ Resources ▸ META-INF ▸ AIR. There you will find a file called hash. Duplicate the hash file and rename the copy to debug. Open the debug file in a text editor and remove its content. Done: now you get the stack trace with line numbers.
Windows:
Browse to the application's install directory in a file explorer. Navigate to {app-folder} ▸ META-INF ▸ AIR. Here you will find a file called hash. Duplicate the hash file and rename the copy to debug. Open the debug file in a text editor and remove its content. Done: now you get the stack trace with line numbers.
If you can't find the hash file, just create an empty file without a file extension and call it debug.
Tested with AIR 3.6!
There is no way until a new version of AIR supports it. It doesn't now because of performance issues, which render the global handler almost useless. I'm waiting for it too, because the alternative is logging everything yourself, and that is very time-consuming.
The compiler option:
compiler.verbose-stacktraces=true
should embed stack information even in a non-debug build.
About the compiler option: I develop with IntelliJ IDEA 14. In my build options I also have "Generate debuggable SWF" enabled. Maybe that's the reason why it's working.
Greetings