OTP program arguments and warnings - intellij-idea

I've installed OTP using IntelliJ IDEA. I downloaded GTFS and OSM data for Berlin and created a run configuration to build the graph and start the server. When I run it, it gives me the following warnings:
15:26:18.781 INFO (Graph.java:805) Main graph size: |V|=844791 |E|=1965688
15:26:18.781 INFO (Graph.java:806) Writing graph C:\Users\paula\Desktop\berlindata\Graph.obj ...
15:26:32.793 INFO (Graph.java:844) Graph written.
15:26:32.889 INFO (GraphBuilder.java:171) Graph building took 4,4 minutes.
15:26:32.891 INFO (GraphScanner.java:81) Attempting to automatically register routerIds [route]
15:26:32.891 INFO (GraphScanner.java:82) Graph files will be sought in paths relative to C:\Users\paula\Desktop\berlindata\graphs
15:26:32.893 INFO (GraphService.java:189) Registering new router 'route'
15:26:32.893 WARN (InputStreamGraphSource.java:218) Graph file not found or not openable for routerId 'route': java.io.FileNotFoundException: C:\Users\paula\Desktop\berlindata\graphs\route\Graph.obj (The system cannot find the path specified)
15:26:32.893 WARN (InputStreamGraphSource.java:144) Unable to load data for router 'route'.
15:26:32.893 WARN (GraphService.java:198) Can't register router ID 'route', no graph.
15:26:32.909 INFO (GrizzlyServer.java:50) Starting OTP Grizzly server on ports 8080 (HTTP) and 8081 (HTTPS) of interface 0.0.0.0
15:26:32.909 INFO (GrizzlyServer.java:52) OTP server base path is C:\Users\paula\Desktop\berlindata
15:26:33.994 WARN (PropertiesHelper.java:330) There is no way how to transform value "true" [java.lang.Boolean] to type [java.lang.String].
15:26:34.088 INFO (NetworkListener.java:750) Started listener bound to [0.0.0.0:8080]
15:26:34.151 INFO (NetworkListener.java:750) Started listener bound to [0.0.0.0:8081]
15:26:34.151 INFO (HttpServer.java:300) [HttpServer] Started.
15:26:34.151 INFO (GrizzlyServer.java:130) Grizzly server running.
I think my problem is that I'm not writing the program arguments properly. I wrote the following:
(screenshot of the program arguments)
Do you know what I should change?
Thank you

When you add the "--router route" parameter to the OpenTripPlanner arguments, OpenTripPlanner will, once the graph is built, look for a folder called "route" in the <basePath>\graphs\ folder.
Because you are building the graph in the basePath folder, you should be able to add the "--autoScan" argument and remove the "--router route" argument, and OpenTripPlanner should detect the Graph.obj file automatically.
Alternatively, you can move Graph.obj to a folder called <basePath>\graphs\route and keep the arguments as they are. Keep in mind, though, that every time you run that command it will rebuild the Graph.obj file but load the graph from the <basePath>\graphs\route folder. I personally run two separate commands: one to build the OpenTripPlanner graph, and another to start the OpenTripPlanner server with the graph created in the previous step.
The OpenTripPlanner documentation should give you some advice, as should running OpenTripPlanner with the "--help" argument.
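The mismatch is easy to see by comparing the two locations involved. A minimal sketch of the path OTP looks up for a routerId versus where the build step wrote the graph (the helper function and the example base path are illustrative, not OTP code):

```python
import os

def router_graph_path(base_path, router_id):
    # With "--router <id>", OTP looks for the graph at
    # <basePath>/graphs/<routerId>/Graph.obj
    return os.path.join(base_path, "graphs", router_id, "Graph.obj")

base = "/home/paula/berlindata"  # illustrative basePath

# Where "--router route" makes OTP look:
print(router_graph_path(base, "route"))
# Where the build step actually wrote the file, per the log:
print(os.path.join(base, "Graph.obj"))
```

Either fix above (moving Graph.obj into graphs\route, or switching to "--autoScan") makes these two locations agree.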


yarn usercache dir not resolved properly when running an example application

I am using Hadoop 3.2.0 and trying to run a simple application in a Docker container. I have made the required configuration changes in both yarn-site.xml and container-executor.cfg to choose LinuxContainerExecutor and the docker runtime.
I am using the distributed shell example from one of the Hortonworks blogs: https://hortonworks.com/blog/trying-containerized-applications-apache-hadoop-yarn-3-1/
The problem is that when the application is submitted to YARN, it fails with a directory-creation issue and the error below:
2019-02-14 20:51:16,450 INFO distributedshell.Client: Got application report from ASM for, appId=2, clientToAMToken=null,
appDiagnostics=Application application_1550156488785_0002 failed 2 times due to AM Container for appattempt_1550156488785_0002_000002 exited with exitCode: -1000 Failing this attempt.
Diagnostics: [2019-02-14 20:51:16.282] Application application_1550156488785_0002 initialization failed (exitCode=20) with output:
main : command provided 0
main : user is myuser
main : requested yarn user is myuser
Failed to create directory /data/yarn/local/nmPrivate/container_1550156488785_0002_02_000001.tokens/usercache/myuser - Not a directory
I have configured yarn.nodemanager.local-dirs in yarn-site.xml, and I can see it reflected in the YARN web UI at localhost:8088/conf:
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/yarn/local</value>
  <final>false</final>
  <source>yarn-site.xml</source>
</property>
I do not understand why it is trying to create the usercache dir inside the nmPrivate directory.
Note: I have verified myuser's permissions on the directories and have also tried clearing the directories manually, as suggested in a related post, but to no avail. I do not see any additional information about the container launch failure in any other logs.
How do I debug why the usercache dir is not resolved properly? I would really appreciate any help on this.
I realized that this was all because of the users the services were started as, and their permissions on the directories the services work with.
After making the required changes, I am able to run the examples and other applications seamlessly.
Thanks to the Hadoop user community for the direction. I am adding the link here for more details:
http://mail-archives.apache.org/mod_mbox/hadoop-user/201902.mbox/browser
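For context, a healthy NodeManager local dir keeps usercache, filecache, and nmPrivate as siblings; the per-user cache should never end up nested under nmPrivate as in the error above. A small sketch of the expected layout (the local dir and user name are taken from the question; the subdirectory names are the standard NodeManager ones):

```python
import os

local_dir = "/data/yarn/local"  # value of yarn.nodemanager.local-dirs

# Standard sibling subdirectories the NodeManager maintains under each local dir:
layout = {
    "usercache": os.path.join(local_dir, "usercache", "myuser"),  # per-user localized app resources
    "filecache": os.path.join(local_dir, "filecache"),            # publicly localized files
    "nmPrivate": os.path.join(local_dir, "nmPrivate"),            # NM-private files such as tokens
}

for name, path in layout.items():
    print(f"{name}: {path}")
```

In the failing run, the container-executor was effectively writing usercache/myuser underneath a nmPrivate ".tokens" path instead, which is what the wrong service users and directory permissions produced.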

TFS Build error - Cannot listen on pipe name 'xxx' because another pipe endpoint is already listening on that name

In TFS, I'm running a build for my .NET project. I have an agent configured locally, and the build runs on it. The error is as follows:
Cannot listen on pipe name 'net.pipe://localhost/taskagent/6/xxxxxx' because another pipe endpoint is already listening on that name.
I'm not sure what exactly this is. Please help. I have attached the error screenshot for reference.
Note: I'm not using any TDD/test process in the code.
From the error info it's not clear whether this is related to the TFS side. I suggest you also run the build manually, directly on the build agent machine.
Since the agent is newly configured, this narrows down whether the error is related to the environment on the build server machine. You could also create a new build definition with a simple project, such as a hello-world application, and check whether you get the same error. If so, I suggest you delete the agent and reconfigure it following this tutorial: Deploy an agent on Windows
Besides, you could set system.debug=true to enable verbose debug logging for the build and get more detailed error info; please refer to: Enable Verbose Debug Mode for TFS Build vNext

Unable to place a node in Node-RED

I have created a node; let's name it A.
I have imported it into Node-RED on my Raspberry Pi by running npm link A in the directory where the package.json file is, then going to the Node-RED directory (~/.node-red) and running npm link A there.
I say "successfully imported" because when I go to the Manage Palette menu, the node is listed there. However, it does not appear in the palette on the left, so I can't use the node.
Is there a straightforward way to fix this, or is this an indication that something is wrong with the node itself (such as a syntax error or a faulty dependency)?
Node-RED's startup log will tell you which nodes it discovered but failed to load. It may show a useful error there, but it may just say the node failed to load.
The easiest way to check for basic syntax errors is to load the node's .js file into Node.js manually. In the directory containing the file, run the following:
$ node
> require('./foo.js')
(assuming the node's .js file is foo.js)
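If the file loads cleanly, it is also worth double-checking that the node-red section of your package.json points at the right .js file, since Node-RED only loads the files listed there. A hypothetical fragment (the file name a.js and package layout are assumptions, not taken from your project):

```json
{
  "name": "A",
  "version": "1.0.0",
  "node-red": {
    "nodes": {
      "A": "a.js"
    }
  }
}
```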

redmine:migrate_from_trac - stack level too deep error

After a successful installation of Redmine, I am trying to migrate data from Trac to Redmine, and I am getting the following error. Is there any workaround to fix this?
user@user:~/redmine-2.3$ rake redmine:migrate_from_trac RAILS_ENV="production"
WARNING: a new project will be added to Redmine during this process.
Are you sure you want to continue ? [y/N] y
Trac directory []: /home/user/implementation
Trac database adapter (sqlite3, mysql2, postgresql) [sqlite3]:
Trac database encoding [UTF-8]:
Target project identifier []: implementation
Migrating components...
Migrating milestones...
Migrating custom fields...
Migrating tickets...
Migrating wiki...
Components: 178/183
Milestones: 39/39
Tickets: 2082/2082
Ticket files: 0/421
Custom values: 2812/2812
Wiki edits: 5/5
Wiki files: 0/0
rake aborted!
stack level too deep
Tasks: TOP => redmine:migrate_from_trac
(See full trace by running task with --trace)
This is a typical stack overflow error: it means a function is being called recursively in an infinite loop. It is caused by a bug in that script, likely triggered because your data is somehow corrupted and the script cannot handle that.
Try calling the script with the --verbose flag, or check the log files for error messages. Try to locate the problem in your data by running the script on reduced input (e.g. without tickets).
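"Stack level too deep" is Ruby's stack overflow error, raised when recursion never reaches a base case. The same failure mode can be reproduced in a few lines in any language; an illustrative sketch in Python, where the equivalent error is RecursionError:

```python
def recurse(n):
    # No base case: each call pushes a new stack frame until the
    # interpreter's stack limit is hit -- the same failure mode as
    # Ruby's "stack level too deep".
    return recurse(n + 1)

try:
    recurse(0)
except RecursionError as err:
    print("stack overflow:", err)
```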

Unable to start Active MQ on Linux

I am trying to get the ActiveMQ server running on a Raspberry Pi Debian Squeeze box. Everything appears to be installed correctly, but when I try to start the service I get the following:
root@raspberrypi:/var/www/activemq/apache-activemq-5.7.0# bin/activemq
INFO: Loading '/etc/default/activemq'
INFO: Using java '/usr/jre1.7.0_07/bin/java'
/usr/jre1.7.0_07/bin/java: 1: /usr/jre1.7.0_07/bin/java:ELF0
4°: not found
/usr/jre1.7.0_07/bin/java: 2: /usr/jre1.7.0_07/bin/java: Syntax error: "(" unexpected
Tasks provided by the sysv init script:
restart - stop running instance (if there is one), start new instance
console - start broker in foreground, useful for debugging purposes
status - check if activemq process is running
setup - create the specified configuration file for this init script
(see next usage section)
Configuration of this script:
The configuration of this script can be placed on /etc/default/activemq or /root/.activemqrc.
To use additional configurations for running multiple instances on the same operating system
rename or symlink script to a name matching to activemq-instance-<INSTANCENAME>.
This changes the configuration location to /etc/default/activemq-instance-<INSTANCENAME> and
$HOME/.activemqrc-instance-<INSTANCENAME>. Configuration files in /etc have higher precedence.
root@raspberrypi:/var/www/activemq/apache-activemq-5.7.0#
It looks like there is an error somewhere, but I am fairly new to this and don't know where to look.
Just adding an answer because, as per the documentation, the command is wrong.
To start ActiveMQ, navigate to [installation directory]/bin and run ./activemq start, or simply bin/activemq start.
If you want to see it running live in a window, use bin/activemq console.
To stop it, you have to kill the process.
The default ActiveMQ "getting started" link sends you here: http://activemq.apache.org/getting-started.html which is the Getting Started Guide for ActiveMQ 4.x.
If you scroll down the main documentation page, you will find this link with the proper commands:
http://activemq.apache.org/version-5-getting-started.html