I followed some online tutorials to design an ETL process on my local machine and upload it to a slave machine for execution.
I did the following things:
Set up one slave server under the "Slave server" folder
Right-clicked it and chose to monitor this slave server
I can monitor the sample .ktr running on my slave server, which tells me the connection to the slave server is working.
But...
Click the Run arrow
Tick "Execute remotely" and choose the created slave server as the remote host
"Launch"
I get an error:
Unable to Connect to Server
You don't seem to be getting a connection to the server. Check the path you're using and make sure the server is up and running.
This I cannot understand, because I can monitor the slave server in real time.
If I also tick "Pass export to remote server", I get another error:
HTTP Status 404 - /kettle/registerPackage/ - Not Found
netstat -nltp result on the remote server:
tcp6 0 0 :::8181 :::* LISTEN -
PS
I'm using PDI 6.0, downloaded from SourceForge.
You can reach my test Carte server at:
http://52.19.57.94:8181/
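A quick way to sanity-check that Carte itself is answering (cluster/cluster below are Carte's default credentials; adjust if you changed them) is to hit the status servlet directly:
curl -u cluster:cluster http://52.19.57.94:8181/kettle/status/
If that returns the status page, the server side is fine and the problem is in the Spoon slave-server definition.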
I had the same problem. The issue turned out to be that if you are not using Pentaho Server (Enterprise) but "only" Carte, you should not put anything in the "Web App Name" field of the Slave Server setup.
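For reference, a standalone Carte instance is typically started with just an interface and a port, something like:
sh carte.sh 0.0.0.0 8181
so there is no web application context at all, which is why the "Web App Name" field has to stay empty when you point Spoon at plain Carte.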
I also had this issue; it turned out that I had different versions of Spoon and Carte (Spoon v5.4 and Carte v5.3). Using Spoon v5.3 instead resolved both the "Unable to connect ..." and the "HTTP Status 404 ..." errors.
You're using v6, so I'm not sure if this applies.
Related
I have a Java application that is currently deployed in a WebLogic clustered environment with 2 managed servers. I would like to enable remote debugging so I can investigate an issue with session data replication. I followed the steps provided here. After restarting the 2 managed servers, it seems the configuration has no effect. I used the same debug config below on both managed servers.
-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=8457,server=y,suspend=n
I also tried adding the lines below to each managed server's startup script.
export debugFlag=true
export DEBUG_PORT=8457
I used the telnet command and got this error: Could not open connection to the host, on port 8457: Connect failed.
The Linux firewall is already disabled.
Has anyone encountered this issue? Also, how do I enable debugging in my IDE (e.g. Eclipse/IntelliJ)?
Thanks in advance for the help.
This is already resolved. As advised by @devwebcl, I put the additional line below in my startManagedWebLogic.sh:
export JAVA_OPTIONS="${JAVA_OPTIONS} -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=8457,server=y,suspend=n"
I put the same arguments (e.g. -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=8457,server=y,suspend=n) in the Server Start arguments section of each managed server in the WebLogic admin console.
This makes sure that whether you start your managed server via the WebLogic admin console or via the shell startup script of each managed server, the same arguments will be picked up.
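To confirm the agent is actually listening before attaching the IDE (8457 as configured above), something like this on the managed server host should show the flag and the port:
ps -ef | grep jdwp
netstat -nltp | grep 8457
From the IDE side this is a "Remote Java Application" debug configuration in Eclipse, or a "Remote JVM Debug" (just "Remote" in older versions) run configuration in IntelliJ, pointing at the managed server's host and port 8457.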
I am new to Mesos and have just finished setting up Mesos along with ZooKeeper on my test server.
Unfortunately I keep getting an error message on my Mesos console indicating I am unable to connect to Mesos on port 5050, and I can't seem to figure out why.
I have included the error in the screenshot below.
The Mesos log files don't point to why the error is showing either.
I resolved the problem by running:
./bin/mesos-master.sh --ip=x.x.x.x --work_dir=/var/lib/mesos --hostname=x.x.x.x
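To verify the master is then reachable on 5050 (x.x.x.x as above), a plain HTTP request is enough:
curl http://x.x.x.x:5050/master/state
(on older Mesos versions the endpoint is /master/state.json instead).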
We can avoid this problem by starting mesos-master with the following options:
--ip=xx.xx.xx.xx --hostname_lookup=false
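In full, the command would look something like this (address and work_dir are placeholders):
./bin/mesos-master.sh --ip=xx.xx.xx.xx --hostname_lookup=false --work_dir=/var/lib/mesos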
I have resolved this problem. Open the web page in Chrome and open the developer tools; you will see that Chrome is accessing the site by domain name. In my case the domain name was "mesosphere", and since there was no "mesosphere" entry in DNS, the access failed.
I solved the problem by adding "mesosphere" to the hosts file, C:\Windows\System32\drivers\etc\hosts.
If you use a domain name for the Mesos cluster, you must map that name in the Windows hosts file.
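For example, the entry in C:\Windows\System32\drivers\etc\hosts would look like this (the IP is whatever your Mesos master actually runs on):
x.x.x.x    mesosphere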
There can be multiple issues here.
Is your mesos-master running and healthy?
Has the leader election process completed?
Check if you are able to do
ping leader.mesos
If the above ping doesn't work, that means a leader has not been elected. Fix that first.
I had this problem too. Luckily, I also have a working Mesos server, so I could compare my demo against it. I captured the packets between client and server in my demo and found that the browser didn't resend fresh requests, only some keepalive packets.
But when I captured the packets on the working Mesos server, I found the browser sending GET requests frequently, like in the image.
I think that if you run some task or add some agent, it may prompt the browser to send requests frequently, and then the "Failed to connect" message will disappear.
I was having the same issues, and what fixed it for me was the ZooKeeper configuration. In my case I was using the EC2 public IP address rather than the private one. Once I changed the /etc/mesos/zk file to zk://<private IP>:2181/mesos, I was able to connect without the constant error messages. In other words, ZooKeeper was reporting that it was running on one IP while mesos-master was trying to connect using a different one.
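Concretely, something along these lines (the private IP stays a placeholder here, and the restart command depends on your init system):
echo "zk://<private IP>:2181/mesos" | sudo tee /etc/mesos/zk
sudo service mesos-master restart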
My configuration was correct as suggested, but the mesos-master service still failed to start. However, there is an alternative way to start the mesos-master node with exactly the same configuration. Commands to start mesos-master:
$ cd /usr/sbin    (or <mesos installation directory>/bin)
$ sudo ./mesos-master --work_dir=/var/lib/mesos --log_dir=/home/rajeev/logs/mesos/
This starts the mesos-master service successfully for me.
After a successful installation of Windows Azure Pack, I am trying to install Windows Azure Pack: Web Sites v2 U6.
I am able to install most of the servers through the Website Controller. All the servers below are in the Ready state, so there is no installation error:
Management server
Front end server
Publication server
File server.
But when I add the Worker server, it is added fine and the installation process starts. When it reaches a certain point in the installation, it seems it is not getting some connection string. The relevant log output is below.
Start service: rsfilter.
Service rsfilter is running.
Configure Idle Pageout feature.
Completed configuration of Idle Pageout feature.
Take ownership for file C:\Windows\system32\Drivers\http.sys.
Configure DWAS Files location to path 'C:\DWASFiles'
File caching is turned off
Execute command 'powershell.exe Import-Module NetQoS; $policy = Get-NetQosPolicy -PolicyStore ActiveStore | Where-Object { $_.Name -eq 'udplimit' }; if (!$policy) { New-NetQosPolicy -name 'udplimit' -ThrottleRateActionBitsPerSecond 65536 -IpProtocol UDP -PolicyStore ActiveStore -ea Stop }'
Setup database connection string for server WAPSQL .
Setup data service credentials.
Stop service: WAS.
Service 'WAS' is stopped.
Set IPv4 dynamic port range, with starting port 30000, and number of ports 35536.
Execute command 'netsh.exe int ipv4 set dynamicport tcp start=30000 num=35536'
Start service: dwassvc.
Service dwassvc is running.
WorkerManagementService started. Ready to receive ConnectionString and DataServiceCredentials.
Waiting for worker connection string. Attempt number is 12.
Waiting for worker connection string. Attempt number is 24.
Waiting for worker connection string. Attempt number is 36.
Waiting for worker connection string. Attempt number is 48.
I also tried to repair the Front End server as mentioned on the Microsoft forum, but it did not help.
Here's Microsoft forum link
Any trick or guideline will be appreciated.
Thanks,
Dharmendra
I found that my Front End server was not accessible from the management server and the Website Controller. It was the DNS server not resolving names correctly (the server could be accessed by IP address but not by server name). Also, using klist tickets, I found that the Front End server did not have a Kerberos ticket in the list.
For the solution, I removed the Front End server from the Website Controller and then removed it from the domain. Then I rejoined it to the domain. After that I checked that I could access it by name from the management server and the Website Controller, for example \\FRONTEND\C$. Then I added the Front End server first and the Web Worker server after it, and now it gets the connection string properly and is added with "Ready" status.
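For anyone repeating these checks, the verification from the management server / Website Controller was roughly this (FRONTEND is the example server name from above):
ping FRONTEND
klist tickets
dir \\FRONTEND\C$
Name resolution, a Kerberos ticket list, and the admin share should all work before you re-add the servers.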
Hope this will help someone.
Thank you very much for pointing this out. It had been hectic for more than a week.
Regards,
Dharmendra
I had a perfectly running remote redis-server on my LAN (at x.x.x.x:p). I was able to use RedisDesktopManager to view the server contents. However, there were a lot of dangling connections to my server from different clients (subscribed to channels) which I wanted to close, so I used the SHUTDOWN command from the console of the desktop manager. Since then, I have been unable to connect to the remote server. The desktop manager is running and lets me add a new redis-server connection, but I am not able to connect to the previous server connection. Whenever I try to connect to the server at x.x.x.x:p, the desktop manager terminates. I have not made any configuration changes to the previously running server, so I am sure that I am not making any mistake with port bindings. Any help on what I am missing will be really appreciated.
Thanks in advance!
The problem was with the configuration binding. The bind line in the configuration file was originally
bind 132.168.131.129 127.0.0.1 0.0.0.0
I changed it to
bind x.x.131.129
and the server connection was restored.
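After editing the bind line it is worth restarting the server and pinging it; the config path and port below are the usual defaults, so adjust them to your setup:
redis-server /etc/redis/redis.conf
redis-cli -h x.x.131.129 -p 6379 ping
The expected reply is PONG.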
RabbitMQ starts up just fine, but the shovel plugin status is listed as "starting".
Each broker is running on a separate AWS instance. The remote server is Windows Server 2008; the local server is Amazon Linux.
I'm using the following rabbitmq.config:
[{rabbitmq_shovel,
[{shovels,
[{scrape_request_shovel,
[{sources, [{broker,"amqp://test_user:test_password@localhost"}]},
{destinations, [{broker, "amqp://test_user:test_password@ec2-###-##-###-###.compute-1.amazonaws.com"}]},
{queue, <<"scp_request">>},
{ack_mode, on_confirm},
{publish_properties, [{delivery_mode, 2}]},
{publish_fields, [{exchange, <<"">>},
{routing_key, <<"scp_request">>}]},
{reconnect_delay, 5}
]}
]
}]
}].
Running the following command:
sudo rabbitmqctl eval 'rabbit_shovel_status:status().'
returns:
[{scrape_request_shovel,starting,{{2012,7,11},{23,38,47}}}]
According to this question, this can happen if the users haven't been set up correctly on the two brokers. However, I've double-checked that I've set up the users correctly via rabbitmqctl add_user on both machines, and have even tried it with a different set of users to be sure.
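For anyone checking the same thing, the user setup on a broker looks roughly like this (the "/" vhost and the wildcard permissions are assumptions on my part, not something from the original config):
rabbitmqctl add_user test_user test_password
rabbitmqctl set_permissions -p / test_user ".*" ".*" ".*"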
I also ran an nmap scan of port 5672 on the remote host to verify it was up and listening on that port.
UPDATE: The problem isn't solved, but it does appear to be the result of connection problems with the remote server. I changed "reconnect_delay" to 0 in my config file, to avoid having the shovel retry the connection indefinitely. I highly recommend others with this problem do the same, as it allows you to get error messages out of rabbit_shovel_status. In my case I got the following error:
[{scrape_request_shovel,
{terminated,
{{badmatch,{error,access_refused}},
[{rabbit_shovel_worker,make_conn_and_chan,1},
{rabbit_shovel_worker,handle_cast,2},
{gen_server2,handle_msg,2},
{proc_lib,init_p_do_apply,3}]}},
{{2012,7,12},{0,4,37}}}]
Answering my own question here, in case others encounter this issue: this error (and also the timeout error {{badmatch,{error,etimedout}}, if you get it) is almost certainly a communication problem between the two machines, most likely due to port access / firewall settings.
There were a couple of dumb things I was doing here:
1) I was using the wrong DNS name for my remote EC2 instance (D'oh! Really dumb -- I can't tell you how long I spent banging my head against the wall on this one...). Remember that stopping and starting your instance generates a new public DNS name if you don't have an Elastic IP associated with the instance.
2) My remote instance is a Windows server, and I realized you have to open up port 5672 both in the Windows firewall and in the EC2 security groups -- there are two overlapping levels of access control here, and opening the port in the EC2 management console isn't sufficient if your machine is Windows Server on EC2, as you also have to configure the Windows firewall.
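A quick way to test both layers is to probe the port from the Linux broker and, if it's closed, add an inbound rule on the Windows side (the rule name is arbitrary):
nc -vz ec2-###-##-###-###.compute-1.amazonaws.com 5672
netsh advfirewall firewall add rule name="AMQP 5672" dir=in action=allow protocol=TCP localport=5672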