I am using Grid.startNodes(java.util.Collection, java.util.Map, boolean, int, int)
as defined here: http://gridgain.com/api/javadoc/org/gridgain/grid/Grid.html#startNodes(java.util.Collection, java.util.Map, boolean, int, int)
Code I am using:
GridConfiguration cfg = GridCfgGenerator.GetConfigurations(true);
Grid grid = GridGain.start(cfg);
Collection<Map<String,Object>> coll = new ArrayList<>();
Map<String, Object> host = new HashMap<String, Object>();
//host.put("host", "23.101.201.136");
host.put("host", "10.0.0.4");
host.put("port", 22);
host.put("uname", "username");
host.put("passwd", "password");
host.put("nodes", 7);
//host.put("ggHome", null); /* don't state so that it will use GRIDGAIN_HOME enviroment var */
host.put("cfg", "/config/partitioned.xml");
coll.add(host);
GridFuture f = grid.startNodes(coll, null, false, 3600 * 3600, 4);
System.out.println("before f.get()");
f.get();
I ran the above code on a VM with IP 10.0.0.7.
I have remote desktop access into the VM whose host IP is 10.0.0.4 and see no change in its state. The code completes and exits. Both VMs can run GridGain locally and discover each other's nodes when I start them using bin/ggstart.bat.
I can manually start a node on 10.0.0.4 (the machine I am trying to SSH into via this API) by running $GG_HOME/bin/ggstart.bat $GG_HOME/config/partitioned.xml, so there is no issue with the configuration file.
I am not quite sure how to debug this, as I get no errors.
Successful completion of the future returned from the startNodes(..) method means that your local node has established an SSH session and executed a start command for each node it was going to start. But successful execution of the command doesn't mean the node will actually be started, because startup can fail for several reasons (e.g., a wrong GRIDGAIN_HOME).
You should check the following:
Are there GridGain logs created in the GRIDGAIN_HOME/work/log directory? If yes, check them: there could have been an exception during the startup process.
If there are no new logs, something is wrong with the executed command. The command can be found in the local node's logs; search for "Starting remote node with SSH command: ..." lines. You can open an SSH connection in a terminal, run this command manually, and see what happens.
You may also want to check your SSH logs to see whether there are any errors.
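For completeness, here is a minimal sketch of how the future's outcome can be inspected, assuming the GridGain 6.x-style API where GridFuture.get() throws a checked GridException (the helper name is illustrative):
import org.gridgain.grid.GridException;
import org.gridgain.grid.GridFuture;
// f is the future returned by grid.startNodes(...) in the snippet above
static void awaitStart(GridFuture<?> f) {
    try {
        f.get(); // blocks until the SSH start commands have run on every host
        System.out.println("All SSH start commands were executed.");
    } catch (GridException e) {
        // a failure here points at the SSH layer itself (auth, connectivity, timeout)
        e.printStackTrace();
    }
}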
Related
I am using Apache VFS to upload a file to an SFTP server if the file is newer than the file on the server or doesn't exist there yet. The server connection uses SSH keys for authentication.
I am using the following java code (plus error handling etc.) to connect to the server and check the file modification date-time:
DefaultFileSystemManager manager = new DefaultFileSystemManager();
manager.addProvider("sftp", new SftpFileProvider());
manager.init();
FileSystemOptions opts = createDefaultOptions();
BytesIdentityInfo identityInfo = new BytesIdentityInfo(server.sshKey.getBytes(), null);
SftpFileSystemConfigBuilder.getInstance().setIdentityProvider(opts, identityInfo);
remoteFileObject = manager.resolveFile(new URI("sftp",server.UserName,server.HostName,server.Port,remoteFilePath,null,null).toString(), createDefaultOptions(server.Key));
FileContent content = remoteFileObject.getContent();
return content.getLastModifiedTime();
The SSH key is in the format -----BEGIN RSA PRIVATE KEY----- etc., as exported by PuTTYgen under Conversions -> Export OpenSSH Key (i.e. the old OpenSSH key format, not the new one).
I have tested this code on Windows, with a locally hosted SFTP server (i.e. also on the same Windows machine), and it works successfully.
I am now wanting to use this in a Linux environment (RHEL), connecting to an AWS Transfer SFTP server, secured using SSH keys as described.
I can connect successfully using the SFTP command from the Linux OS shell:
sftp -oIdentityFile=/path/to/test.ppk USER@xxx.xxx.xxx.xxx
But when I try to run the Java code, it hangs on the call to manager.resolveFile.
After half an hour (I think - this might not be related), I get the following in /var/log/messages:
systemd-logind[1297]: Session 115360 logged out. Waiting for processes to exit.
systemd[1]: session-115360.scope: Succeeded.
systemd-logind[1297]: Removed session 115360.
I don't have SELinux enabled, so I don't think that's interfering in any way.
Can anyone help suggest what might be causing this?
There were a couple of things, as it turns out:
Timeout
The timeout can be set when you configure the SftpFileSystemConfigBuilder, using the .setSessionTimeout(FileSystemOptions, Duration) method. This reduces the timeout which, if nothing else, makes the issue easier to debug.
Exec channel
The session messages in /var/log/messages were not related to the issue. Instead, the problem was that the exec channel is disabled on the SFTP server while VFS tries to use it. At a simple level, this detection can be switched off with setDisableDetectExecChannel on the SftpFileSystemConfigBuilder object, but you should understand the implications of this before doing so.
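A minimal sketch of both settings together, assuming the commons-vfs2 2.8+ API (the timeout value is arbitrary, and the factory method mirrors the question's createDefaultOptions() helper):
import java.time.Duration;
import org.apache.commons.vfs2.FileSystemOptions;
import org.apache.commons.vfs2.provider.sftp.SftpFileSystemConfigBuilder;
public class SftpOptionsFactory {
    static FileSystemOptions createDefaultOptions() {
        FileSystemOptions opts = new FileSystemOptions();
        SftpFileSystemConfigBuilder builder = SftpFileSystemConfigBuilder.getInstance();
        // fail fast instead of hanging while the session is negotiated
        builder.setSessionTimeout(opts, Duration.ofSeconds(30));
        // stop VFS from probing the exec channel, which this server has disabled
        builder.setDisableDetectExecChannel(opts, true);
        return opts;
    }
}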
I am trying to use the Geode Redis Adapter as my server for the rate limiting provided by Spring Cloud Gateway. If I use a real Redis server, everything works perfectly, but with the Geode Redis Adapter it doesn't.
I am not sure whether this functionality is supported.
I tried to start a Geode image (https://hub.docker.com/r/apachegeode/geode/) exposing the default Redis port 6379. After starting the container, I executed the following command using gfsh:
start server --name=redis --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
When I try to access it from my local machine via redis-cli -h localhost -p 6379, I can connect.
My implementation is simple:
application.yaml
- id: rate-limitter
  predicates:
    - Path=${GUI_CONTEXT_PATH:/rate-limit}
    - Host=${APP_HOST:localhost:8080}
  filters:
    - name: RequestRateLimiter
      args:
        key-resolver: "#{@remoteAddrKeyResolve}"
        redis-rate-limiter:
          replenishRate: ${rate.limit.replenishRate:1}
          burstCapacity: ${rate.limit.burstCapacity:2}
  uri: ${APP_HOST:localhost:8080}
Application.java
@Bean
KeyResolver remoteAddrKeyResolve() {
    return exchange -> Mono.just(exchange.getSession().subscribe().toString());
}
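(For reference, a common alternative resolver keys on the client address rather than on the session subscription's toString(); a sketch, with null handling of getRemoteAddress() omitted for brevity:)
import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.context.annotation.Bean;
import reactor.core.publisher.Mono;
@Bean
KeyResolver remoteAddrKeyResolve() {
    return exchange -> Mono.just(
            exchange.getRequest().getRemoteAddress().getAddress().getHostAddress());
}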
When my application starts and I try to access /rate-limit, I expect it to connect to Redis and my page to be displayed.
However, my Spring application keeps trying to reconnect (i.l.c.p.ReconnectionHandler: Reconnected to localhost:6379), so the page is not displayed and keeps loading. FIXED in Edit 1 below.
The remaining problem: using RedisRateLimiter, I tried to simulate access with a for loop (sketched below). Checking the RedisRateLimiter.REMAINING_HEADER, the value is always -1. That doesn't seem right, because I don't have this issue with Redis itself.
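(The simulation loop looked roughly like this; the endpoint URL and probe class are illustrative, and a real test would also need to handle a 429 response, which RestTemplate raises as an exception:)
import org.springframework.cloud.gateway.filter.ratelimit.RedisRateLimiter;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;
public class RateLimitProbe {
    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();
        for (int i = 0; i < 10; i++) {
            ResponseEntity<String> resp =
                    rest.getForEntity("http://localhost:8080/rate-limit", String.class);
            // always prints -1 when backed by the Geode Redis Adapter
            System.out.println(resp.getHeaders().getFirst(RedisRateLimiter.REMAINING_HEADER));
        }
    }
}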
During startup of the application, I also receive these messages when connecting to the Geode Redis Adapter:
Starting without optional epoll library
Starting without optional kqueue library
Is anything missing in my Geode Redis Adapter setup, or anything else in Spring?
Thank you
Edit 1: I had missed starting the locator and creating the region; that's why I wasn't able to connect:
start locator --name=locator
start server --name=redis --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
create region --name=redis-region --type=REPLICATE_PERSISTENT
I'm using the Docker Selenium images to run browser nodes; the repo is available at https://github.com/SeleniumHQ/docker-selenium. There is no documentation on how config.json can be used to provide proxy values.
I'm using Selenium version 2.44.0.
In my infrastructure, certain assets are sourced from a location that requires a proxy configured on the browser to access them. I'm trying to set up a proxy on a Chrome node. According to this documentation, a proxy can be set like the following:
java -jar selenium-2.44.0.jar -Dhttp.proxyHost=192.168.2.10 -Dhttp.proxyPort=80
My proxy does not require a username and password, hence I have ignored those values.
What is not clearly mentioned in the SeleniumHQ documentation is whether the proxy configuration is needed on both the hub and the nodes, or just the nodes. I've tried different combinations, but none have worked for me.
The actual commands I'm running are:
For Hub:
java -jar /opt/selenium/selenium-server-standalone.jar -role hub -Dhttp.proxyHost=192.168.2.10 -Dhttp.proxyPort=80 -hubConfig /opt/selenium/hubconfig.json
When I run the command above, I can see the -D* values displayed in the console config.
For node:
xvfb-run --server-args=":99.0 -screen 0 1360x1020x24 -ac +extension RANDR" java -jar /opt/selenium/selenium-server-standalone.jar -Dhttp.proxyHost=192.168.2.10 -Dhttp.proxyPort=80 -role node -hub http://$HUB_PORT_4444_TCP_ADDR:$HUB_PORT_4444_TCP_PORT/grid/register -nodeConfig /opt/selenium/config.json
When I run this command, I can see the proxy values on the console again, but the assets are not loaded by the browser.
Also, as a side note, it seems this can be done on the developer's side (in Java code), but I'm keen to solve it on my (operations) side.
Thanks - here is what we got:
First you need a way to verify your settings made it into the browser.
chrome://net-internals/proxyservice.config#proxy
The actual command-line invocation of the Chrome executable is:
/chromeexec --proxy-server="http=http://proxyserver:port/;https=http://proxyserver:port/"
Note that the semicolons will blow up on the bash command line if you don't use double quotes.
Now, if you're sending this from WebDriver Java code programmatically, you'll need to escape the double quotes, so the proxy server setting in Java may look like:
org.openqa.selenium.Proxy proxy = new org.openqa.selenium.Proxy();
proxy.setHttpProxy("\"http://proxyserver:port/\"");
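For the Proxy object to take effect, it also has to be attached to the capabilities the node receives; a sketch assuming the standard CapabilityType.PROXY mechanism (the hub URL is illustrative):
import java.net.URL;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
public class ProxyExample {
    public static void main(String[] args) throws Exception {
        Proxy proxy = new Proxy();
        proxy.setHttpProxy("proxyserver:port"); // replace with the real host:port
        proxy.setSslProxy("proxyserver:port");
        DesiredCapabilities capabilities = DesiredCapabilities.chrome();
        capabilities.setCapability(CapabilityType.PROXY, proxy);
        // register against the grid hub so the proxy applies to the remote session
        WebDriver driver = new RemoteWebDriver(
                new URL("http://hub-host:4444/wd/hub"), capabilities);
    }
}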
Alternatively you can pass this in as an execution parameter.
DesiredCapabilities capabilities = DesiredCapabilities.chrome();
capabilities.setCapability("chrome.switches", Arrays.asList("--proxy \"http=http://proxyserver:port/;https=http://proxyserver:port/\""));
WebDriver driver = new ChromeDriver(capabilities);
Now, your original question was about accessing external resources via the proxy. What we did (in a similar situation) was to add a proxy exception for the site we were hitting, so that the external resources would still go via the proxy.
So you add an exception for your primary website; assuming it is at 10.1.10.5, it looks like:
--proxy-bypass-list=10.1.10.5
Which we then do in code as:
capabilities.setCapability("chrome.switches", Arrays.asList("--proxy-server=\"http=http://proxyserver:port/;https=http://proxyserver:port/\"", "--proxy-bypass-list=10.1.10.5"));
Note that setting a username and password for the proxy is affected by a bug in Chrome. (Please star it if this holds you up.)
If you need a username and password, then the solution is a PAC file.
The syntax is:
--proxy-pac-url=file:///proxy.pac
The file format looks like:
function FindProxyForURL(url, host) {
    if (host == "mylocalserver.com") {
        return "DIRECT";
    } else {
        return "PROXY wcg2.example.com:8080";
    }
}
For the case of usernames and passwords in proxy settings, note the following:
Proxy auto-configuration files do not support hard-coded usernames and passwords. There's good reasoning behind this too, since providing support for hard-coded credentials would open up significant security holes, as anybody would be able to easily view the required credentials to access the proxy.
Rather, configure the proxy as a transparent proxy; that way you won't need a username and password. You mention in one of your comments that the proxy server is located outside your LAN, which is why you require authentication. However, most proxies support rules based on the source IP, in which case it's a simple matter of only allowing requests originating from your corporate network.
The proxy auto-config specification was originally drafted by Netscape in 1996. The original specification is no longer available directly, but you can still access it using The Wayback Machine's archived copy. The specification hasn't changed much and is still largely the same as it was originally. You'll see that it is quite simple, and that there is no provision for hard-coded credentials.
To solve this problem, you can use this tool:
https://github.com/sjitech/proxy-login-automator
This tool creates a local proxy and automatically injects the user/password into the connection to the real proxy server. It also supports PAC scripts.
I'm new to Fabric, so this might have a simple answer I've missed due to bad search terminology.
I'm trying to start a new Ubuntu EC2 instance in AWS, then connect to it with Fabric and have it execute several commands. However, it seems there is a problem with Fabric's SSH connection; maybe I'm defining some env variable wrong?
@task  # starts a new EC2 instance and sets env variables
def prep_deploy():
    # ...code to start the new EC2 instance, named buildhost...
    env.hosts = [buildhost.public_dns_name]
    env.user = "ubuntu"
    env.key_filename = ".../keypair.pem"
    env.port = 22

@task
def deploy():
    run("echo $HOME")  # code fails here
    ....
I run fab prep_deploy deploy, since I read you need a new task for the new env variables to take effect.
I get
Fatal error: Timed out trying to connect to ...amazonaws.com (tried 1 time)
Underlying exception: timed out
The security groups for the instance are open to SSH: I can connect through PuTTY. In fact, if I empty the env.host_string variable at the start of deploy(), when it prompts me to manually input a host, I can type in "ubuntu@...amazonaws.com:22", with the host name exactly as seen in the output at the start of the task, and it will connect to the instance. But I can't figure out how to set the environment variables so that Fabric picks up the host name.
It looks like your Fabric settings and use of env variables are correct; I was able to use the code you provided to connect to my own Ubuntu VM. I wonder if you are having a connection issue because the Amazon instance is not fully booted and ready for connections when your script runs the second task. I have run into that issue on other VM hosts.
I added the following code to check and retry the connection; it might help you:
import socket
import time

def waitforssh():
    address = env.host_string
    port = 22
    while True:
        time.sleep(5)
        try:
            # create a fresh socket per attempt; a socket cannot be
            # reused after a failed connect
            s = socket.socket()
            s.connect((address, port))
            s.close()
            return
        except Exception:
            print("failed to connect to %s:%s" % (address, port))
Insert the function call into your deploy task:
def deploy():
    waitforssh()
This should test the connection. If the port does not respond, it will wait 5 seconds and try again.
That could explain why your second attempt to connect works.
I'm searching for a way to get my files synchronized (as a scheduled task) from a web server (Ubuntu 14) to a local server (Windows Server). The web server creates small files, which the local server needs. The web server is in a DMZ, accessible through SSH, and only the local server is able to access folders on the web server. I tried using programs like WinSCP, but I'm not able to set up a "get" job.
Is there a way to do this with SSH on the Windows server without logging in every few seconds? Or is there a better solution? In the future, web services are a possibility, but at the moment I need a quick solution.
Either you need to schedule a regular, frequent job that connects and downloads changes.
Or you need a continuously running process that keeps the connection open and regularly watches for changes.
There's hardly a better solution (that's still quick and easy to implement).
Example of a continuous process implemented using the WinSCP .NET assembly:
// Setup session options
SessionOptions sessionOptions = new SessionOptions {
Protocol = Protocol.Sftp,
HostName = "example.com",
UserName = "user",
Password = "mypassword",
SshHostKeyFingerprint = "ssh-rsa 2048 xxxxxxxxxxx...="
};
using (Session session = new Session())
{
// Connect
session.Open(sessionOptions);
while (true)
{
// Download changes
session.SynchronizeDirectories(
SynchronizationMode.Local, localPath, remotePath, false).Check();
// Wait 10 seconds
Thread.Sleep(10000);
}
}
You will need to add better error handling and reconnection logic in case the connection breaks.
If you do not want to implement this as a (C#) application, you can use a PowerShell script. For a complete solution, see
Keep local directory up to date (download changed files from remote SFTP/FTP server).