IBM uDeploy client import request problem

I have had a PowerShell scripted call to the IBM uDeploy command line client (udclient) in my TFS CI build process for some time now.
My udclient call is scripted like so:
udclient.cmd -weburl $uDeployServer -authtoken $authToken "importVersions" $requestJson
... and my JSON file ($requestJson) content looks like this:
{
  "component": "[uDeploy component name]",
  "properties": {
    "version": "[component version]"
  }
}
These requests, and subsequent udclient version deploy requests, have been working as expected until recently. However, a couple of weeks ago, the version import requests started to fail mysteriously.
In the uDeploy UI, in the Version Import History tab in the Component Configuration, I can see the failed Import Requests.
However, when I open the Output Log for inspection, it is empty.
The Error Log contains only this:
"The version import failed for the following reasons:
JSONObject["value"] not found."
Manual version import from the uDeploy UI still works as expected.
Also, once manual intervention has been applied to complete the version import in the CI build, the subsequent version deploy request executes without any problems.
I'm no expert in Java, but the error seemed to suggest something was amiss with the JSON file. However, to test my JSON (I'm using PowerShell 5, so no Test-Json; that cmdlet only arrived in PowerShell 6), running the following PowerShell script:
try {
    # -Raw reads the file as a single string, and -ErrorAction Stop makes a parse failure land in the catch block
    $json = Get-Content -Path '[component version import].json' -Raw | ConvertFrom-Json -ErrorAction Stop
    Write-Host "JSON is valid."
} catch {
    Write-Host "JSON is dodgy."
}
... returns:
JSON is valid.
So, what's going on? Could it be something to do with encoding in the JSON file, or something like that?
Ideas and insights appreciated; thanks for looking.

I scripted the REST API call in native PowerShell:
Invoke-RestMethod -Uri "$uDeployServer/cli/component/integrate" -Method Put -Headers $headers -ContentType "application/json" -Body $json
The request was sent without issue but sadly, as with the udclient call, the same error persisted.
Looking at the failed version import request record in the uDeploy UI, aside from the vague message in the error log, the request's Input Properties showed only two properties (successful requests show many properties from the component configuration):
- version (value correctly read from the JSON file provided)
- description (value blank)
I added a new property 'description' to my request JSON; the file content now looks like this:
{
  "component": "[uDeploy component name]",
  "properties": {
    "version": "[component version]",
    "description": "[description]"
  }
}
Hey presto! The version import request now executes successfully.

Related

SvelteKit breaks npm's import mechanism

I've written several npm library projects, and this is the way I import symbols in one JS file from another JS file, but it won't work in the script section of a Svelte file.
My 'package.json' file has a name field (e.g. set to '@jdeighan/something') and an 'exports' section with entries like "./utils": "./src/lib/utils.js". Then in any other JS file I can import symbols from utils.js with "import {somesymbol} from '@jdeighan/something/utils'". That's how you do imports from a library you've installed with 'npm install', but it also (cleverly) works inside the project itself. In a Svelte file, however, this won't work - I get the error message "Failed to resolve import "@jdeighan/something/utils" from "src\routes\+page.svelte". Does the file exist?". Here is what I have in my Svelte file:
<script>
  import {somesymbol} from '@jdeighan/something/utils';
</script>
I know that Svelte has a handy $lib alias, but I'd prefer to use the npm standard mechanism; it seems to be broken when using SvelteKit, though (not sure about plain Svelte).
I'd prefer to use the npm standard mechanism
This is absolutely not the standard mechanism. I have never seen people import from the current project by package name. While this is supported by Node itself, nothing else seems to support it, including e.g. the VS Code language server, which will be unable to provide code navigation.
Using the package name makes it less clear that the import is local rather than a separate dependency, and if the name were ever changed it would have to be adjusted everywhere.
I would recommend just not doing that. SvelteKit has $lib predefined as a default to provide essentially the same functionality in a convention-based way that actually works.
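Based on the exports entry in the question, which maps './utils' to './src/lib/utils.js', the equivalent $lib import would look something like this (a sketch; somesymbol and the file location are taken from the question):
<script>
  // $lib resolves to src/lib by default in SvelteKit
  import {somesymbol} from '$lib/utils';
</script>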
If you create a project with just these 3 files, then execute node foo.js in a console window, you get "Hello, World!":
package.json:
{
  "name": "@jdeighan/something",
  "type": "module",
  "version": "1.0.0",
  "exports": {
    "./utils": "./utils.js"
  }
}
foo.js:
import {message} from '@jdeighan/something/utils'
console.log(message);
utils.js:
export let message = 'Hello, World!';

Angular CLI unit tests with PhantomJS - Unexpected token 'const'

Environment
@angular/cli@1.0.0
karma-phantomjs-launcher@1.0.4
node@6.9.4
npm@3.10.10
Issue
I've just added a service to my project which imports a function from a node_module library. Now, when I try to run my tests I get the following output.
SyntaxError: Unexpected token 'const'
at webpack:///~/print.js/src/js/print.js:4:0 <- src/test.ts:21436
PhantomJS 2.1.1 (Mac OS X 0.0.0) ERROR
SyntaxError: Unexpected token 'const'
at webpack:///~/print.js/src/js/print.js:4:0 <- src/test.ts:21436
A sample service looks like this
import {Injectable} from '@angular/core';
import printjs from 'print.js/src';

@Injectable()
export class PrintService {
  constructor() {}

  testPrint(url: string): void {
    printjs(url);
  }
}
I get this issue when I run
ng test --browsers PhantomJS
My tsconfig.json and src/tsconfig.spec.json already have the target set to 'es5', which rules out what seems to be the most common cause of this issue.
I have read a number of posts including the following but haven't found anything that has worked. Is there anything else I can do?
SyntaxError: Unexpected token 'const' for testing.es5.js
https://github.com/angular/angular-cli/issues/5185
I had the same issue and also read many posts, but no luck. Then I used Babel: it takes your code and makes it compatible with your browser.
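To make that concrete: the usual approach is to transpile the offending ES2015 dependency to ES5 before it reaches PhantomJS. Below is a sketch of the kind of babel-loader rule that does this; it assumes you can edit the webpack config (with @angular/cli 1.x that generally means ng eject) and that babel-loader, babel-core and babel-preset-es2015 are installed, so treat it as illustrative rather than a drop-in fix.
// webpack config fragment (illustrative): transpile print.js's ES2015 source to ES5
const path = require('path');

module.exports = {
  module: {
    rules: [
      {
        test: /\.js$/,
        // only target the dependency that ships untranspiled ES2015 code
        include: [path.resolve(__dirname, 'node_modules/print.js/src')],
        use: {
          loader: 'babel-loader',
          options: {
            presets: ['es2015'] // PhantomJS 2.1 only understands ES5
          }
        }
      }
    ]
  }
};
A simpler thing to try first: import the package's published build (import printjs from 'print.js') instead of 'print.js/src', if the bundle it ships is already ES5.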

IntelliJ run vs running a jar, with a Spring Boot Kotlin multi-module Gradle project with social OAuth2

TL;DR: Why does everything run fine when started via IntelliJ, and why is it broken when calling java -jar app.jar? And how do I fix this?
Alright, I have some issues with a backend I am trying to dockerize. I have an application created with Spring Boot (1.4.2.RELEASE) following the Spring OAuth (2.0.12.RELEASE) guide on their page. I followed the Gradle version, since I prefer Gradle over Maven, and I am using Kotlin instead of Java. Everything is fine: I start my backend with its static front end via IntelliJ, I can log in via Facebook (and Google and GitHub), I receive a nice Principal which holds all the information I need, and I can modify Spring Security to authorize and permit endpoints. So far so good.
Now for the bad part: when I run either ./gradlew clean build app:bootRun or ./gradlew clean build app:jar and run the jar via java -jar (like I will do in my Docker container), my backend comes up and my static front end pops up. Now I want to log in via Facebook: I end up on the Facebook login page, I enter my credentials, and... nothing!
I end up back on my homepage, not logged in, with no log messages that mean anything to me, just silence. The last thing I see in the log is: Getting user info from: https://graph.facebook.com/me
In my browser, this URL gives me:
{
  "error": {
    "message": "An active access token must be used to query information about the current user.",
    "type": "OAuthException",
    "code": 2500,
    "fbtrace_id": "GV/58H5f4fJ"
  }
}
When going to this URL after an IntelliJ start, it gives me my credential details. Obviously something is going wrong, but I have no clue what, especially since a run from IntelliJ works fine. There is some difference between how the jar is started and how IntelliJ's run configuration works, but I have no clue where to search for what. I could post trace logging, or all my Gradle files, but perhaps that's too much info to put in one question. I will definitely update this question if someone needs more details :)
The structure outline of this project is as follows:
root:
- api: is going to be opensourced later, contains rest definitions and DTOs.
- core: contains the meat. Its Gradle file also pulls in spring-boot-starter, -web, -security, spring-security-oauth2, and some Jackson stuff.
- rest: contains versioned rest service implementations.
- app: contains the Angular webjars (amongst others), the front end, my `@SpringBootApplication` and `@EnableOAuth2Client` annotations, and the implementation of `WebSecurityConfigurerAdapter`.
Why does everything run fine when started via IntelliJ, and why is it broken when using bootRun or the jar artefact? And how do I fix this?
I found it. The problem was not related to multi-module Gradle, Spring Boot, or OAuth2. In fact it was due to the Gradle source set config, where Java is supposed to live in the Java source set folder and Kotlin in the Kotlin source set folder:
sourceSets {
    main.java.srcDirs += 'src/main/java'
    main.kotlin.srcDirs += 'src/main/kotlin'
}
As Will Humphreys stated in his comment above, IntelliJ takes all source sets and runs the app. However, when building the jar via Gradle, the source sets are applied strictly. I had a Java file in my Kotlin source set, which is no problem for IntelliJ, but the jar created by Gradle honours the source sets exactly as defined in the build.gradle file, so that Java file never made it into the jar.
I found my missing bean issue with the code below:
@Bean
public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
    return args -> {
        System.out.println("Let's inspect the beans provided by Spring Boot:");
        String[] beanNames = ctx.getBeanDefinitionNames();
        Arrays.sort(beanNames);
        for (String beanName : beanNames) {
            System.out.println(beanName);
        }
    };
}
The bean I was missing was called AuthenticationController, which is a @RestController and rather crucial for my authentication code.

I am getting a "Bad response from Chimp Server" in my console when trying to run a meteor app with velocity/cucumber testing on it

The error is not in my regular console; it's in my tail -f console. It won't run the tests at all. In my localhost:3000 Velocity sidebar it also says "chimp server error". I am not sure how to fix this; I am very new to Velocity and Cucumber, so it could be a stupid mistake, but I couldn't find any information on this error anywhere.
Could you provide us with the whole Meteor log and also the Cucumber log? If possible, please do meteor reset first (be aware, though, that this will wipe your local MongoDB; if you want to avoid that, at least clear the Cucumber log - echo '' > filePath will work).
I ran into the same problem yesterday while trying to follow Josh Owen's now-outdated Cucumber tutorial. The error was coming from the step definition file, which was not wrapping everything in a module.exports function like this:
(function() {
  module.exports = function() {
    // step definitions go in here
  };
})();
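For context, a filled-in step definition file might look something like the sketch below; the step text and body are made up for illustration, and the this.Given registration style is the Cucumber.js API that Chimp used at the time.
(function() {
  module.exports = function() {
    // `this` is the Cucumber context that Chimp passes to the exported function
    this.Given(/^I am on the home page$/, function(callback) {
      // a real step would drive the browser or call into the app here
      callback();
    });
  };
})();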
It could also be that your test directory and files aren't structured correctly in your app.
It should look like this:
app/
  tests/
    cucumber/
      features/
        step_definitions/
          my_steps.js
        my_feature.feature
      fixtures/
        my_fixture.js
Let me know if that makes a difference. Also, this is a good place to start with velocity/cucumber: http://www.mhurwi.com/a-basic-cucumber-meteor-tutorial/
It's very basic but there isn't much out there for learning testing with Meteor.

Titanium ACS issue

I'm trying to create an ACS server using Titanium Studio, following the example of pixgrid (https://github.com/appcelerator/pixgrid/), but I always get an error when trying to run it locally; console output:
[INFO] Installing dependencies...
[INFO] Dependencies installed.
[INFO] socket.io started
[ERROR] Error occurred. TypeError: Cannot call method 'init' of undefined
at Object.start (/app.js:8:7)
app.js listing:
var ACS = require('acs').ACS,
    logger = require('acs').logger,
    express = require('express'),
    partials = require('express-partials');

// initialize app (setup ACS library and logger)
function start(app) {
  ACS.init('***', '***');
  logger.setLevel('DEBUG');
  //use connect.session
  app.use(express.cookieParser());
  app.use(express.session({ key: 'node.acs', secret: "secret" }));
  //set favicon
  app.use(express.favicon(__dirname + '/public/images/favicon.ico'));
  //set to use express-partial for view
  app.use(partials());
  //Request body parsing middleware supporting JSON, urlencoded, and multipart
  app.use(express.bodyParser());
}
// release resources
function stop() {
}
Of course I have my OAuth key and secret in place of the ***. The same thing happens when running from the command line (acs run).
I am running Titanium Studio, build 3.4.1.201410281727.
I can, however, publish the service and then run it from the cloud without any issues. For development this is not ideal, so I want to run it locally (a local node.ACS server).
I guess there must be something wrong with where things are installed (I only used the defaults), or with permissions. Does anyone have a clue how to fix this? I have spent some hours now searching the internet, but I seem to be the only one with this exact problem. No clue what else to try.
Thanks for reading this far. If you require more information to help me, let me know.
Ok, I found the problem. They changed the way to use ACS in the last upgrade.
Classic mode was:
var ACS = require('acs').ACS;
ACS.init('<ACS Key>', '<ACS secret>');
Now they have changed it and ACS is a "module" like any other one, so you must use the new way. In the package.json file, add it as a dependency:
"dependencies": {
"acs-node": ">=0.9.2"
}
Install it: npm install acs-node
Now you can use it in its new format in the app.js file:
var ACS = require('acs-node');
ACS.init('<App Key>');
It's all explained here: http://docs.appcelerator.com/cloud/latest/#!/guide/node_acs
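Putting that together, the start() function from the question's app.js would end up looking something like this. This is just a sketch: the Express middleware lines are carried over unchanged from the question, and I have left out the logger setup since I'm not sure where the logger lives after this change (check the guide above).
// app.js (sketch): ACS now comes from the standalone acs-node module
var ACS = require('acs-node'),
    express = require('express'),
    partials = require('express-partials');

// initialize app (setup ACS library)
function start(app) {
  ACS.init('<App Key>'); // new-style init takes the App Key only

  app.use(express.cookieParser());
  app.use(express.session({ key: 'node.acs', secret: 'secret' }));
  app.use(express.favicon(__dirname + '/public/images/favicon.ico'));
  app.use(partials());
  app.use(express.bodyParser());
}

// release resources
function stop() {
}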