Artifactory replication after connection reset

Does Artifactory cache partially replicated files on remote repositories and then resume the download after the connection is established again? Or does it start the replication all over after a connection reset?

Your question is not 100% clear. Are you asking whether replication from a remote Artifactory instance will continue from where it stopped in case of a connection reset, or do you mean that you started to resolve an artifact from a remote repository and the connection was interrupted?
Replication basically requests a file list from the remote Artifactory instance and replicates any missing item. That means if you have 10 files on the remote instance and the connection was interrupted after 5 files had already been replicated, the next replication run will only replicate the remaining 5 files and not the whole 10.
In the second case, if a remote download was interrupted before the file was completely downloaded, then you will need to download the entire file again.
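As a rough illustration of the replication case (this is only a sketch of the behavior, not Artifactory's actual implementation, and remote.txt/local.txt are hypothetical path listings of each side):
# Conceptual sketch: replication acts like a diff between the remote file list and what is already local
comm -23 <(sort remote.txt) <(sort local.txt) > missing.txt   # paths present remotely but not here yet
while read -r path; do
    echo "replicating $path"   # only the files that never arrived are transferred again
done < missing.txt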
Does that answer your question?

Related

How to properly restart a Kafka S3 sink connector?

I started a Kafka S3 sink connector (the bundled connector from the Confluent package) on 1 May. It worked fine until 8 May. Checking the status, it showed that an AWS exception had crashed the connector. This should not be a big problem, so I wanted to restore it.
I tried the following steps:
I POSTed /connectors/s3sink/restart. Then I saw the connector was in RUNNING state, but the task was still FAILED.
Then I PUT /connectors/s3sink/task/0/restart. OK, now the task was in RUNNING state.
But then, tailing the log, I found it had started rewriting old data, such as the data from 3 May. And it messed up the old data!
So, does the Connect restart REST API reset the offsets? I thought it would save the offsets and just resume from where it failed.
And how do I restart a failed connector task correctly? By deleting the pods (I'm using Kubernetes), or via REST /task/0/restart? When should I use /connectors/s3sink/restart?
/connectors/:name/restart is an operation that goes through the worker leader and has to propagate to the other workers asynchronously; it restarts the Connector instance rather than its tasks (which is why your task stayed FAILED). So you need to ensure network connectivity between the leader worker and all the others.
/connectors/:name/tasks/:num/restart will send the request straight to the worker running that task, restarting its thread.
A restart should not reset the offsets, since for a sink connector they are stored in the consumer offsets topic for that Connect cluster. If anything, the tasks were not able to commit offsets back to the __consumer_offsets topic, but you should see log messages about that.
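For reference, a minimal sketch of the relevant Connect REST calls; the connector name s3sink comes from your question, and the worker address localhost:8083 and bootstrap server localhost:9092 are assumptions (defaults):
# Check connector and task state
curl -s http://localhost:8083/connectors/s3sink/status

# Restart only the Connector instance (does not restart its tasks)
curl -s -X POST http://localhost:8083/connectors/s3sink/restart

# Restart the failed task itself - usually what you need after a task goes FAILED
curl -s -X POST http://localhost:8083/connectors/s3sink/tasks/0/restart

# A sink connector's committed offsets live in its consumer group, named connect-<connector name> by default
kafka-consumer-groups --bootstrap-server localhost:9092 --group connect-s3sink --describe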

JMeter distributed load testing - no results returned from slave to master in GUI mode

I am not getting any results from the slave machine, and there is also no entry in the DB.
1. Connectivity between master and slave is established, since when I run the test remotely from the master, the slave states 'Start the test' and 'Finish the test'.
2. Also, the script executes successfully with a single master and slave.
3. Also, since the server has a dynamic IP, I am not able to provide an IP and port.
I am not able to figure out what exactly the problem is. If you can check through TeamViewer, please guide me further when you get some time.
(screenshots: slave machine, master machine)
Make sure both master and slave are running the same Java version
Make sure both master and slave are running the same JMeter version (also consider upgrading to latest JMeter version - JMeter 3.3 as of now)
If you use any JMeter Plugins or there are any libraries in JMeter Classpath make sure to copy them over to slave machines as well.
Check jmeter-server.log on the slave side
If you want to see request and response details in the GUI, add the next line to the user.properties file on all slaves:
mode=Standard
Check out the How to Load Test Opening a URL on a Remote Machine with JMeter article for a comprehensive explanation and step-by-step instructions.
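A quick way to work through that checklist from a shell; the host names slave1/slave2 and the test plan name plan.jmx are placeholders for your own setup:
java -version                              # must match on the master and every slave
jmeter -v                                  # must match on the master and every slave
echo "mode=Standard" >> user.properties    # in JMETER_HOME/bin on each slave, so sample results travel back to the master
tail -f jmeter-server.log                  # on the slave, watch for errors while the test runs
jmeter -n -t plan.jmx -R slave1,slave2     # non-GUI remote run, preferable to GUI mode for load tests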

Connect to existing SFTP server instead of starting new SFTP subprocess

I'm thinking about writing a new SFTP server. The current SFTP servers are started for every session: if there are three users of SFTP, there are three SFTP servers. That's not what I want. I want one server that every new SFTP session connects to. How can I do this?
When you log in to the server to start an SFTP session, an SSH process is started and an SFTP subsystem is started as well. The SSH process takes care of the encryption etc. The I/O is done through the standard file descriptors 0, 1 and 2 (stdin, stdout and stderr) of the SFTP process.
This all works when there is a dedicated SFTP process for every session. But how can I make it work when there is one SFTP server I want to connect to? Via an "ssh-to-sftp-connect-agent"?
More information:
I want to use SFTP version 6, which is better than version 3, which is what OpenSSH uses. The OpenSSH community does not want to upgrade their SFTP implementation:
https://bugzilla.mindrot.org/show_bug.cgi?id=1953
A very good open source sftp server is at:
http://www.greenend.org.uk/rjk/sftpserver/
and a very useful overview:
http://www.greenend.org.uk/rjk/sftp/sftpversions.html
This server uses SFTP protocol version 6, but (b)locking and handling of ACLs are not implemented. To implement these, shared tables are necessary for all open files, with their access flags and the (b)locking mode and owner, for (b)locking to work. When every SFTP session leads to another process with:
Subsystem sftp /usr/libexec/gesftpserver
(which is inevitable when you want to use any protocol version higher than 3)
then a shared database is a solution for handling locks and ACLs.
Another solution is that every new SFTP session connects to one existing "super" SFTP server, which is started at boot time. Simultaneous access, locking etc. is then much easier to program.
How do I do this with this line:
Subsystem sftp /usr/libexec/exampleconnectagent
In the ideal case the agent establishes the connection between the dedicated SSH process for the session and the SFTP server, and then terminates.
Long story, but is this possible? Do I have to use the passing of file descriptors described here:
Can I share a file descriptor to another process on linux or are they local to the process?
Thanks in advance.
Addition:
I'm working on an SFTP file server listening on a server socket. Clients can connect to it by using OpenSSH's direct-streamlocal functionality to connect a channel to it. This way I can have one server process for all clients, which is what I wanted in the first place.
The current SFTP servers are started for every session.
What do you mean by "current SFTP servers"? Which one specifically?
OpenSSH (the most widely used SSH/SFTP server) did indeed open a new subprocess for each SFTP session, and there's hardly any problem with that. Recent versions don't anymore, though: with the (new) default configuration, an in-process SFTP server (aka internal-sftp) is used.
See OpenSSH: Difference between internal-sftp and sftp-server.
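If it does turn out you are on a reasonably recent OpenSSH, the in-process server is just a matter of the Subsystem line in sshd_config (this replaces the external sftp-server binary; restart sshd afterwards):
Subsystem sftp internal-sftp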
If you really want to get an answer to your question, you have to tell us what SFTP/SSH server your question is about.
If it is indeed about OpenSSH:
Nothing needs to be done, the functionality is there already.
If you want to add your own implementation, you have to modify OpenSSH code, there's no way to plug it in. Just check how the internal-sftp is implemented.
The other way is using the agent architecture, as you have suggested yourself. If you want to take this approach and need some help, you should ask a more specific question about inter-process communication and/or sharing file descriptors, and not about SFTP.
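To make the agent idea concrete, here is a minimal sketch, assuming a single long-running SFTP server that listens on a hypothetical Unix socket at /var/run/sftp-hub.sock. Note that this relays the session's byte stream over the socket rather than passing the file descriptors themselves, which is usually sufficient for the subsystem case:
#!/bin/sh
# /usr/libexec/exampleconnectagent - sshd starts this per session via:
#   Subsystem sftp /usr/libexec/exampleconnectagent
# Relay this session's stdin/stdout to the one central SFTP server on the socket.
exec socat STDIO UNIX-CONNECT:/var/run/sftp-hub.sock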

ActiveMQ takes a long time to failover

I have 3 ActiveMQ brokers in a networked Shared File System (GlusterFS) / Master-Slave configuration - all in VMs.
If the master fails the client should failover to the new master.
The issue I have is that the connection to the new master takes about 50 seconds.
Is that reasonable?
How to improve it?
My client connection looks like this
failover:(tcp://a1:61616?connectionTimeout=1000,tcp://a2:61616?connectionTimeout=1000,tcp://a3:61616?connectionTimeout=1000)?randomize=false&maxReconnectDelay=10000&backup=true
Also, when I disconnect the master by unplugging the network cable, it stops and throws an exception regarding the KahaDB (which is on GlusterFS) and needs to be restarted.
Is there a workaround for this behavior so the master broker auto-restarts, or is able to reconnect automatically once the network comes back?
The failover depends on the time the underlying file system takes to release the file lock.
In your case, the NFS cluster is waiting 50s to detect that the first node is lost and then release the lock on the KahaDB file, which can then be taken by the second node.
You can customize this delay with the NFSD_V4_GRACE and NFSD_V4_LEASE parameters in the NFS server configuration file (/etc/sysconfig/nfs on Red Hat/CentOS systems).
You can also customize the KahaDB lockKeepAlivePeriod; see http://activemq.apache.org/pluggable-storage-lockers.html
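For example, on a Red Hat/CentOS NFS server the two grace/lease timers live in /etc/sysconfig/nfs; the values below (in seconds) are only illustrative and should be tuned to how quickly you want the lock released:
# /etc/sysconfig/nfs
NFSD_V4_GRACE=10
NFSD_V4_LEASE=10
# then restart the NFS server, e.g. service nfs restart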

How to fix the ZooKeeper error for HBase

The main OS is Windows 7 64-bit. I'm using VM Player to create two CentOS 5.6 VMs. The network connection is bridged. I installed HBase on both CentOS systems, one as the master, the other as the slave. When I enter the shell and run status 'details':
The error from the master is
zookeeper.ZKConfig: no valid quorum servers found in zoo.cfg
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: An error is preventing HBase from connecting to ZooKeeper
And the error from the slave is
ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
Please give me some suggestions.
Thanks a lot.
Check if the following is in your .bashrc; if not, add it and restart all HBase services (do not forget to run them manually as well). That did it for me with a pseudo-distributed installation. My problem (and maybe yours as well) was that HBase wasn't detecting its configuration.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HBASE_CONF_DIR=/etc/hbase/conf
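After adding those exports, something like the following reloads the environment and restarts HBase; it assumes the HBase bin scripts are on your PATH, as is typical in a pseudo-distributed setup:
source ~/.bashrc     # pick up HADOOP_CONF_DIR / HBASE_CONF_DIR in the current shell
stop-hbase.sh        # stop the HMaster, region servers and any managed ZooKeeper
start-hbase.sh       # start them again with the configuration now visible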
I see this very often on my machine. I don't have a failsafe cure, but I end up running stop-all.sh and deleting every place where Hadoop and DFS (it's a DFS failure) store their temp files. It seems to happen after my computer goes to sleep while DFS is running.
I am going to experiment with single-user mode to avoid this. I don't need distribution while developing.
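For the record, my cleanup looks roughly like this; the temp location assumes the default hadoop.tmp.dir of /tmp/hadoop-${user.name}, and reformatting the NameNode wipes HDFS, so this is only acceptable on a development box:
stop-all.sh                       # stop the HDFS and MapReduce daemons
rm -rf /tmp/hadoop-"$USER"/*      # default hadoop.tmp.dir - adjust if you changed it
hadoop namenode -format           # required after wiping the NameNode storage; destroys all HDFS data
start-all.sh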