I've run into a problem where I create a new connection for each request, which is terribly inefficient.
I would like to allow a set maximum number of TLS connections to stay open/cached on my client at once. When data is ready to be transmitted, the client first checks whether there is an idle connection; if not, it checks whether it can create a new one (i.e. the number of open connections is below the maximum allowed). If both checks fail, it has to wait until either a connection becomes idle or the number of open connections decreases. Connections should also be killed after they have been idle for a set amount of time.
Here is some (bad) pseudocode of what I have in mind. Could I have some suggestions?
func newTLSConnection(netDialer *net.Dialer, host string, tlsConfig *tls.Config) (*tls.Conn, error) {
	// Set up the certs
	// ...
	// Make a TLS connection
	con, err := tls.DialWithDialer(netDialer, "tcp", host, tlsConfig)
	if err != nil {
		return nil, err
	}
	return con, nil
}
// Methods cannot be declared on types from another package, so take the
// connection as a parameter instead of defining a method on tls.Conn.
func do(con *tls.Conn, someData []byte) error {
	// Send some data to the server
	if _, err := con.Write(someData); err != nil {
		return err
	}
	// Get the response from the server
	response := make([]byte, 100)
	if _, err := io.ReadFull(con, response); err != nil {
		return err
	}
	return nil
}
func main() {
	var cons []*tls.Conn
	maxConSize := 3
	for {
		if allConsInSliceAreBusy(cons) && len(cons) < maxConSize {
			newCon, err := newTLSConnection( /* ... */ )
			if err != nil {
				// Handle the dial error.
				continue
			}
			cons = append(cons, newCon)
			do(newCon, []byte("stuff"))
		} else if !allConsInSliceAreBusy(cons) {
			conToUse := firstOpenConInCons(cons)
			do(conToUse, []byte("stuff"))
		} else {
			// NOP. Max cons created and they are all busy.
			// Wait for one to become idle or close.
		}
	}
}
Thank you!
What you ask about is called connection pooling. Have a look at the Fasthttp package source code: https://github.com/valyala/fasthttp/blob/master/client.go. You could even use that library, or another one, for your purpose.
You can find the acquireConn func, which does exactly what you need:
It locks the connection pool to allow concurrent use.
It pulls a connection out of the pool, creating a new one if the pool is empty.
It cleans up connections whose TTL has expired.
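If you would rather roll your own, here is a minimal sketch of the same idea (this is not fasthttp's implementation; all names, such as tlsPool, are made up for illustration, and a background reaper for idle connections is omitted). A buffered channel acts as a semaphore capping the number of open connections, and idle connections past their timeout are closed on the next Get:

// Assumes imports: crypto/tls, sync, time.
type pooledConn struct {
	conn     *tls.Conn
	lastUsed time.Time
}

type tlsPool struct {
	mu          sync.Mutex
	idle        []pooledConn
	slots       chan struct{} // semaphore: capacity == max open connections
	idleTimeout time.Duration
	dial        func() (*tls.Conn, error)
}

func newTLSPool(max int, idleTimeout time.Duration, dial func() (*tls.Conn, error)) *tlsPool {
	return &tlsPool{
		slots:       make(chan struct{}, max),
		idleTimeout: idleTimeout,
		dial:        dial,
	}
}

// Get returns an idle connection if one exists, dials a new one if a slot
// is free, and otherwise blocks until another goroutine releases a slot.
func (p *tlsPool) Get() (*tls.Conn, error) {
	p.mu.Lock()
	for len(p.idle) > 0 {
		pc := p.idle[len(p.idle)-1]
		p.idle = p.idle[:len(p.idle)-1]
		if time.Since(pc.lastUsed) > p.idleTimeout {
			pc.conn.Close()
			<-p.slots // an expired connection gives its slot back
			continue
		}
		p.mu.Unlock()
		return pc.conn, nil
	}
	p.mu.Unlock()
	p.slots <- struct{}{} // blocks while the maximum number of connections are open
	conn, err := p.dial()
	if err != nil {
		<-p.slots
		return nil, err
	}
	return conn, nil
}

// Put hands a healthy connection back; it keeps holding its semaphore slot
// until it expires or is handed out again.
func (p *tlsPool) Put(conn *tls.Conn) {
	p.mu.Lock()
	p.idle = append(p.idle, pooledConn{conn: conn, lastUsed: time.Now()})
	p.mu.Unlock()
}

A caller would then do conn, err := pool.Get(), write/read on the connection, and return it with pool.Put(conn).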
I am a beginner and am currently playing with the pubsub example from libp2p given here https://github.com/libp2p/go-libp2p/tree/master/examples/pubsub/basic-chat-with-rendezvous
I have been able to build the code and run the binary in different terminals and it works.
I am trying to automate this process from the main.go program itself by creating a few goroutines that spin up new agents, which publish messages to the network while the rest of the peers subscribe.
I have provided my current modified code below, but it doesn't seem to work: the peers cannot discover each other.
func main() {
help := flag.Bool("help", false, "Display Help")
cfg := parseFlags()
if *help {
fmt.Printf("Simple example for peer discovery using mDNS. mDNS is great when you have multiple peers in local LAN.")
fmt.Printf("Usage: \n Run './chat-with-mdns'\nor Run './chat-with-mdns -host [host] -port [port] -rendezvous [string] -pid [proto ID]'\n")
os.Exit(0)
}
fmt.Printf("[*] Listening on: %s with port: %d\n", cfg.listenHost, cfg.listenPort)
var wg sync.WaitGroup
for i := 0; i < 5; i++ {
// Spawn a thread for each iteration in the loop.
// Pass 'i' into the goroutine's function
// in order to make sure each goroutine
// uses a different value for 'i'.
wg.Add(1)
go func(i int) {
// At the end of the goroutine, tell the WaitGroup
// that another thread has completed.
defer wg.Done()
ctx := context.Background()
r := rand.Reader
// Creates a new RSA key pair for this host.
prvKey, _, err := crypto.GenerateKeyPairWithReader(crypto.RSA, 2048, r)
if err != nil {
panic(err)
}
// 0.0.0.0 will listen on any interface device.
sourceMultiAddr, _ := multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", cfg.listenHost, cfg.listenPort))
// libp2p.New constructs a new libp2p Host.
// Other options can be added here.
host, err := libp2p.New(
libp2p.ListenAddrs(sourceMultiAddr),
libp2p.Identity(prvKey),
)
if err != nil {
panic(err)
}
// Set a function as stream handler.
// This function is called when a peer initiates a connection and starts a stream with this peer.
host.SetStreamHandler(protocol.ID(cfg.ProtocolID), handleStream)
fmt.Printf("\n[*] Your Multiaddress Is: /ip4/%s/tcp/%v/p2p/%s\n", cfg.listenHost, cfg.listenPort, host.ID().Pretty())
peerChan := initMDNS(host, cfg.RendezvousString)
for { // allows multiple peers to join
peer := <-peerChan // will block until we discover a peer // the code currently hangs here
fmt.Println("Found peer:", peer, ", connecting")
if err := host.Connect(ctx, peer); err != nil {
fmt.Println("Connection failed:", err)
continue
}
//** this part of the code is experimental and is not accessed by any thread yet **//
stream, err := host.NewStream(ctx, peer.ID, protocol.ID(cfg.ProtocolID))
if err != nil {
fmt.Println("Stream open failed", err)
} else {
rw := bufio.NewReadWriter(bufio.NewReader(stream), bufio.NewWriter(stream))
go writeData(rw)
go readData(rw)
fmt.Println("Connected to:", peer)
}
//** this part of the code is experimental and is not accessed by any thread yet **//
}
}(i)
}
fmt.Println("exit")
wg.Wait()
fmt.Println("Finished for loop")
}
But this doesn't seem to work. Are there any examples I can look at for solving this?
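One thing worth checking (an assumption on my part, not verified against this example): every goroutine listens on the same cfg.listenPort, so the five hosts contend for one address. A minimal sketch that gives each host its own OS-assigned port instead:

// Sketch: "/tcp/0" asks the OS for any free port, so each spawned host
// gets its own listen address instead of contending for cfg.listenPort.
sourceMultiAddr, err := multiaddr.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/0", cfg.listenHost))
if err != nil {
	panic(err)
}
host, err := libp2p.New(
	libp2p.ListenAddrs(sourceMultiAddr),
	libp2p.Identity(prvKey),
)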
I am building a performance-oriented REST API.
The skeleton was built with go-swagger.
The API has 3 ms to answer; in single use it succeeds, needing only 0.5-0.8 ms for the response. Two calls are made to Redis per request.
This is how the pool is initiated:
func createPool(server string) *redis.Pool {
	return &redis.Pool{
		MaxIdle:     500,
		MaxActive:   10000,
		IdleTimeout: 5 * time.Second,
		//MaxConnLifetime: 1800 * time.Microsecond,
		Dial: func() (redis.Conn, error) {
			c, err := redis.Dial("tcp", server)
			if err != nil {
				return nil, err
			}
			return c, nil
		},
		TestOnBorrow: func(c redis.Conn, t time.Time) error {
			if time.Since(t) < (3 * time.Second) {
				return nil
			}
			_, err := c.Do("PING")
			return err
		},
	}
}
And this is the only place where the pool is used:
func GetValue(params Params) []int64 {
	timeNow := time.Now()
	conn := data.Pool.Get()
	value1 := Foo(conn)
	value2 := Bar(value1, conn)
	conn.Close()
	defer Log(value1, value2)
	return value2
}
So basically at the start I get a connection from the pool, use it for the two Redis requests, and then close it. I previously used defer conn.Close(), as stated in the documentation, and it didn't work either. vm.overcommit_memory=1 and net.core.somaxconn=512 were set on the server.
In single use of the API there is no problem.
Under stress, like 4,000 requests per second, it works for the first ~10 seconds, then gets very slow and no longer manages to answer within the 3 ms stated at the start.
When I check ActiveCount and IdleCount, the values are between 2 and 5 and always the same. Shouldn't a much higher number of connections be possible with a MaxActive value of 10,000? Or am I missing some crucial settings?
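One setting worth knowing about, though whether it applies here is an assumption: redigo's Pool has a Wait field. With Wait: true, Get() blocks until a connection is returned to the pool once MaxActive is reached, instead of handing out a connection that errors:

return &redis.Pool{
	MaxIdle:     500,
	MaxActive:   10000,
	Wait:        true, // block Get() while MaxActive connections are in use
	IdleTimeout: 5 * time.Second,
	// ...
}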
The whole problem was not Redis-dependent. The sockets of the listening port were flooded, since the TCP connections weren't being closed properly during the stress test.
That resulted in around 60k connections in the TIME_WAIT state. The problem was resolved by using live traffic for the stress test instead of JMeter.
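For reference, a minimal sketch of a load-generating loop in Go that avoids this TIME_WAIT flood by reusing connections; the endpoint URL is hypothetical. Go's default http.Transport keeps idle connections alive as long as each response body is drained and closed:

package main

import (
	"io"
	"net/http"
)

func main() {
	client := &http.Client{} // the default Transport pools and reuses TCP connections
	for i := 0; i < 100000; i++ {
		resp, err := client.Get("http://localhost:8080/endpoint") // hypothetical endpoint
		if err != nil {
			panic(err)
		}
		// Drain and close the body so the connection goes back into the
		// pool instead of being torn down into TIME_WAIT.
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
	}
}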
I am trying to establish a TLS connection by providing a tls.Config struct containing a Rand field that should always return the same int when calling their Read method, cf. the docs here: https://golang.org/pkg/crypto/tls/#Config
I've written this builder:
func newZeroRand() *rand.Rand {
return rand.New(rand.NewSource(0))
}
And a test to make sure that rand.Rand always returns the same int when Read is invoked multiple times; notice the different input params "foo" and "bar" producing the same output:
func TestPredictableZeroRandGenerator(t *testing.T) {
zeroRand := newZeroRand()
firstNum, err := zeroRand.Read([]byte("foo"))
if err != nil {
t.Error(err)
}
secondNum, err := zeroRand.Read([]byte("bar"))
if err != nil {
t.Error(err)
}
// fmt.Printf("firstNum %d secondNum %d \n", firstNum, secondNum)
if firstNum != secondNum {
t.Errorf("This is not a predictable zero random generator! The first number is: %d the second number is: %d", firstNum, secondNum)
}
}
Using the newZeroRand() defined above, I was expecting to always generate the same SSL keys inside the file same-key.log when providing the TLS configuration like this:
func tlsConfig() (*tls.Config, error) {
w, err := os.OpenFile("same-key.log", os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0755)
if err != nil {
return nil, err
}
tlsConfig := tls.Config{
Rand: newZeroRand(),
KeyLogWriter: w,
}
return &tlsConfig, nil
}
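For completeness, a sketch of how such a config might be used when dialing; the address is hypothetical:

// Assumes imports: crypto/tls, log.
cfg, err := tlsConfig()
if err != nil {
	log.Fatal(err)
}
conn, err := tls.Dial("tcp", "example.com:443", cfg) // hypothetical address
if err != nil {
	log.Fatal(err)
}
defer conn.Close()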
But across multiple executions I get different file contents. I may be misunderstanding the details here: https://golang.org/pkg/crypto/tls/#example_Config_keyLogWriter because when I open those same-key.log files after each execution,
I find the structure described in the NSS Key Log Format from Mozilla: https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format
CLIENT_RANDOM <FIRST_LONG_ID> <SECOND_LONG_ID>
where:
<FIRST_LONG_ID> is always the same
<SECOND_LONG_ID> changes after each execution <-- why is this happening?
When providing a key file same-key.log for a batch of packets from a different execution, Wireshark is not able to decrypt them!
I may be misunderstanding some internals of SSL cryptography here, and I was wondering if I should also provide certificates in the configuration struct. How could I generate those certificates?
AFAIK, when using certificates on top of the key, there is a piece of information coming from the other side at runtime, so I could not decrypt the stream of packets without that information. This is why I thought certificates are not needed if I want to use Wireshark to decrypt those packets.
Otherwise, I am not sure how I could force the TLS connection to always encrypt/decrypt the packets with the same key.
Edit:
As pointed out by @peter in his answer, I was asserting on the length of the input byte slice rather than on the actual deterministic random values.
I came up with this Read implementation for the tls.Config struct:
type debugRand struct{}

func (dr *debugRand) Read(p []byte) (n int, err error) {
	for i := range p {
		p[i] = 0
	}
	return len(p), nil
}

func newZeroRand() *debugRand {
	return &debugRand{}
}
This sets every element of the input slice to 0.
However, I still generate SSL keys whose <SECOND_LONG_ID> values differ between executions.
Is it possible at all to get Wireshark to reuse SSL keys across different TLS connections when analysing those encrypted TCP packets?
You are using Read wrong, so your test doesn't test what you think it does. firstNum and secondNum are both 3 because Read returns the number of bytes read, and you are passing byte slices of length three in both cases. You never check the actual random bytes.
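Here is a sketch of a test that compares the bytes themselves rather than the byte count. It works with either newZeroRand variant above; note that with the math/rand version a single source read twice produces different bytes, so each read below uses a fresh zero-seeded source:

// Assumes imports: bytes, testing.
func TestPredictableZeroRandGenerator(t *testing.T) {
	first := make([]byte, 8)
	if _, err := newZeroRand().Read(first); err != nil {
		t.Fatal(err)
	}
	second := make([]byte, 8)
	if _, err := newZeroRand().Read(second); err != nil {
		t.Fatal(err)
	}
	if !bytes.Equal(first, second) {
		t.Errorf("not deterministic: first %x, second %x", first, second)
	}
}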
I'm trying to share an unnamed Mach semaphore between two processes.
I can create one and wait on it in the same process.
semaphore_t semaphore = 0;
mach_error_t err = semaphore_create(mach_task_self(), &semaphore, SYNC_POLICY_FIFO, 0);
...
semaphore_wait(semaphore);
But I want to send it to another process (of which I only have the mach_port_t) and then let it semaphore_signal my own process.
I already tried things like:
mach_port_allocate(target, MACH_PORT_RIGHT_RECEIVE, targetSemaphore)
mach_port_insert_right(target, targetSemaphore, semaphore, MACH_MSG_TYPE_COPY_SEND)
This yields an error because the port name already exists in the target process, or an "unknown failure" if I don't allocate it in the target process.
And even:
mach_msg_send
mach_msg_receive
But I can't even get a port right from one process to another to send anything.
What am I doing wrong and is it even possible?
I figured it out:
mach_port_extract_right
is the correct way, instead of:
mach_port_insert_right
Then doing this will do the job:
semaphore_t semaphore = 0;
mach_error_t err = semaphore_create(mach_task_self(), &semaphore, SYNC_POLICY_FIFO, 0);
mach_port_t receivePort = MACH_PORT_NULL;
err = mach_port_allocate(target, MACH_PORT_RIGHT_RECEIVE, &receivePort);
mach_msg_type_name_t type;
semaphore_t sendPort = 0;
err = mach_port_extract_right(target, receivePort, MACH_MSG_TYPE_MAKE_SEND, &sendPort, &type);
// Send the semaphore right in a Mach message over the port (message construction elided)
mach_msg_send(&msg.header);
We are trying to test locks. Basically, there are multiple clients trying to obtain a lock on a particular key. In the example below, we used the key "x".
I don't know how to test whether the locking is working other than by reading the logs.
The correct sequence of events should be:
client1 obtains lock on key "x"
client2 tries to obtain lock on key "x" (fmt.Println("2 getting lock")) - but is blocked and waits
client1 releases lock on key "x"
client2 obtains lock on key "x"
Q1: How could I automate the process and turn this into a test?
Q2: What are some of the tips to testing concurrency / mutex locking in general?
func TestLockUnlock(t *testing.T) {
client1, err := NewClient()
if err != nil {
t.Error("Unexpected new client error: ", err)
}
fmt.Println("1 getting lock")
id1, err := client1.Lock("x", 10*time.Second)
if err != nil {
t.Error("Unexpected lock error: ", err)
}
fmt.Println("1 got lock")
go func() {
client2, err := NewClient()
if err != nil {
t.Error("Unexpected new client error: ", err)
}
fmt.Println("2 getting lock")
id2, err := client2.Lock("x", 10*time.Second)
if err != nil {
t.Error("Unexpected lock error: ", err)
}
fmt.Println("2 got lock")
fmt.Println("2 releasing lock")
err = client2.Unlock("x", id2)
if err != nil {
t.Error("Unexpected Unlock error: ", err)
}
fmt.Println("2 released lock")
err = client2.Close()
if err != nil {
t.Error("Unexpected connection close error: ", err)
}
}()
fmt.Println("sleeping")
time.Sleep(2 * time.Second)
fmt.Println("finished sleeping")
fmt.Println("1 releasing lock")
err = client1.Unlock("x", id1)
if err != nil {
t.Error("Unexpected Unlock error: ", err)
}
fmt.Println("1 released lock")
err = client1.Close()
if err != nil {
t.Error("Unexpected connection close error: ", err)
}
time.Sleep(5 * time.Second)
}
func NewClient() (*Client, error) {
....
}
func (c *Client) Lock(lockKey string, timeout time.Duration) (lockId int64, err error){
....
}
func (c *Client) Unlock(lockKey string, lockId int64) error {
....
}
Concurrency testing of lock-based code is hard, to the extent that provably correct solutions are difficult to come by. Ad-hoc manual testing via print statements is not ideal.
There are four dynamic concurrency problems that are essentially untestable. As with performance testing, a statistical approach is the best you can achieve via test code (e.g. establishing that the 90th-percentile latency is below 10 ms, or that deadlock is less than 1% likely).
This is one of the reasons that the Communicating Sequential Processes (CSP) approach provided by Go is better to use than locks on shared memory. Consider that your goroutine under test provides a unit with specified behaviour. This can be tested against other goroutines that provide the necessary test inputs via channels and monitor the result outputs via channels.
With CSP, using goroutines without any shared memory (and without any inadvertently shared memory via pointers) guarantees that race conditions don't occur in any data accesses. Using certain proven design patterns (e.g. by Welch, Justo and Willcock) can establish that there won't be deadlock between goroutines. It then remains to establish that the functional behaviour is correct, for which the goroutine test harness mentioned above will do nicely.
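As a concrete illustration of that test-harness idea applied to the question's Client (a sketch, assuming the Lock/Unlock API shown above): record the order of events on a channel and assert the sequence, instead of eyeballing the printed logs. The 100 ms sleep makes this statistical rather than airtight, as discussed above.

func TestLockBlocksSecondClient(t *testing.T) {
	events := make(chan string, 3)

	client1, err := NewClient()
	if err != nil {
		t.Fatal("unexpected new client error: ", err)
	}
	id1, err := client1.Lock("x", 10*time.Second)
	if err != nil {
		t.Fatal("unexpected lock error: ", err)
	}
	events <- "1 locked"

	done := make(chan struct{})
	go func() {
		defer close(done)
		client2, err := NewClient()
		if err != nil {
			t.Error("unexpected new client error: ", err)
			return
		}
		// Should block here until client1 releases the lock.
		id2, err := client2.Lock("x", 10*time.Second)
		if err != nil {
			t.Error("unexpected lock error: ", err)
			return
		}
		events <- "2 locked"
		client2.Unlock("x", id2)
		client2.Close()
	}()

	time.Sleep(100 * time.Millisecond) // give client2 time to block on Lock
	events <- "1 unlocking"
	if err := client1.Unlock("x", id1); err != nil {
		t.Fatal("unexpected unlock error: ", err)
	}
	<-done
	client1.Close()

	want := []string{"1 locked", "1 unlocking", "2 locked"}
	for _, w := range want {
		if got := <-events; got != w {
			t.Fatalf("events out of order: got %q, want %q", got, w)
		}
	}
}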
With CSP, using Goroutines without any shared memory (and without any inadvertently shared memory via pointers) will guarantee that race conditions don't occur in any data accesses. Using certain proven design patterns (e.g. by Welch, Justo and WIllcock) can establish that there won't be deadlock between Goroutines. It then remains to establish that the functional behaviour is correct, for which the Goroutine test-harness mentioned above will do nicely.