I am using the following code to load data into Aerospike. data is a slice of maps of type BinMap:
for _, binMap := range data {
	id, ok := binMap["id"].(string)
	if !ok {
		continue // skip records whose id is not a string
	}
	key, err := as.NewKey("test", "myset", id)
	if err != nil {
		fmt.Println(err)
		continue
	}
	if err := shared.Client.Put(nil, key, binMap); err != nil {
		fmt.Println(err)
	}
}
After loading a few records, the following error message is received:
command execution timed out on client: Exceeded number of retries.
See `Policy.MaxRetries`. (last error: Node not found for partition
test:711 in partition table.)
For each iteration, the partition number (the 711 in test:711) changes.
The error continues even after waiting 5 seconds between Put commands. I am not sure which timeout the error message is reporting. What client configuration is required for the Go client?
Using macOS 10.15.3; the Go client; Aerospike running on Docker 2.2.0.3.
There's a good chance your cluster hasn't formed correctly, or that its networking isn't properly set up to give clients access to all the nodes. Since you're using Docker, take a look at Lucien's Medium post How do I get a 2 nodes Aerospike cluster running quickly in Docker without editing a single file?.
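Once the cluster is reachable, note that the timeout in the message is governed by the policy passed to Put (nil means the client's default write policy). A minimal sketch of tuning it, assuming a recent aerospike-client-go release; the exact field names vary between client versions:

	wp := as.NewWritePolicy(0, 0)
	wp.TotalTimeout = 2 * time.Second         // overall cap for the call, including retries
	wp.SocketTimeout = 500 * time.Millisecond // per-attempt network timeout
	wp.MaxRetries = 5                         // the limit named in the error message
	err := shared.Client.Put(wp, key, binMap)

This won't fix a broken partition table, but it makes the retry behavior explicit instead of relying on defaults.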
I have created a RedisClient using go-redis
rdClient := rd.NewClusterClient(rdClusterOpts)
I can perform other database operations using the client:
out, err := rdClient.Ping(context.TODO()).Result()
PONG
I can also perform GET and SET operations using the client.
When I try to rebalance the slots, it shows an error.
out, err := rdClient.Do(context.TODO(), "--cluster", "rebalance", "10.244.0.98", "--cluster-use-empty-masters").Result()
It shows the error:
ERR unknown command '--cluster', with args beginning with: 'rebalance' '10.244.0.96:6379' '--cluster-use-empty-masters
Is there any way to perform the Redis Cluster Manager commands using go-redis or any other Go Redis client?
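Note that the --cluster subcommands (rebalance, reshard, and so on) are implemented inside the redis-cli binary itself, not in the server, which is why the server rejects '--cluster' as an unknown command; no client library can send them over the Redis protocol. A hedged workaround sketch, assuming redis-cli is installed on the machine running the code (os/exec, log, and fmt imports assumed):

	cmd := exec.Command("redis-cli", "--cluster", "rebalance", "10.244.0.98:6379", "--cluster-use-empty-masters")
	out, err := cmd.CombinedOutput() // capture redis-cli's own progress output
	if err != nil {
		log.Fatalf("rebalance failed: %v: %s", err, out)
	}
	fmt.Println(string(out))

Alternatively, the low-level CLUSTER commands that redis-cli orchestrates (CLUSTER SETSLOT, MIGRATE, etc.) can be issued through rdClient.Do, but the orchestration logic then has to be reimplemented by hand.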
We have a Scala Spark application that reads roughly 70K records from the DB into a DataFrame; each record has 2 fields.
After reading the data from the DB, we apply a minor mapping and load the result as a broadcast variable for later use.
Now, in the local environment, there is a timeout exception from the RetryingBlockFetcher while running the following code:
dataframe.select("id", "mapping_id")
.rdd.map(row => row.getString(0) -> row.getLong(1))
.collectAsMap().toMap
The exception is:
2022-06-06 10:08:13.077 task-result-getter-2 ERROR org.apache.spark.network.shuffle.RetryingBlockFetcher Exception while beginning fetch of 1 outstanding blocks
java.io.IOException: Failed to connect to /1.1.1.1:62788
	at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253)
	at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195)
	at org.apache.spark.network.netty.NettyBlockTransferService$$anon$2.createAndStart(NettyBlockTransferService.scala:122)
In the local environment, I simply create the Spark session with spark.master set to local.
When I limit the maximum number of records to 20K, it works fine.
Can you please help? Maybe I need to configure something in my local environment so that the original code works properly?
Update:
I tried changing a lot of Spark-related configurations in my local environment: memory, number of executors, timeout-related settings, and more, but nothing helped. I just got the same timeout, only after more time...
I realized that the DataFrame I'm reading from the DB has a single partition of 62K records. After repartitioning it into 2 or more partitions, the process worked correctly and I managed to map and collect as needed.
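For reference, a minimal sketch of that workaround; the partition count of 4 is an arbitrary assumption:

	dataframe.select("id", "mapping_id")
	  .repartition(4) // break the single 62K-record partition into smaller blocks
	  .rdd.map(row => row.getString(0) -> row.getLong(1))
	  .collectAsMap().toMap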
Any idea why this solves the issue? Is there a Spark configuration that can solve this instead of repartitioning?
Thanks!
I am trying to fetch data from the blockchain using a query in chaincode. I have invoked around 250,000 records into the blockchain and am trying to fetch the data using a query. When I run the chaincode and check the peer logs, I get the error below.
failed to invoke chaincode name:"scbcch" , error: timeout expired while executing transaction
When I run a query over less data, my code works fine without these errors.
Can anybody please help me solve this issue?
I am using Hyperledger Fabric 1.4.
Here is my query code:
queryString := fmt.Sprintf("{\"selector\":{\"_id\": {\"$gt\": null},\"$and\":[{\"terminationReportID\":{\"$ne\":\"%s\"}},{\"terminationReportFlag\":{\"$eq\":\"%s\"}},{\"effectiveDateOfAction\":{\"$gt\":\"%s\"}},{\"importDate\":{\"$eq\":\"%s\"}}]},\"fields\": [\"bankID\",\"effectiveDateOfAction\",\"costCentre\"],\"use_index\":[\"_design/indexTerminationReportDoc\",\"indexTerminationReportName\"]}","null", "Yes", "2018-10-31", lastImportDatekey)
queryResultss11, errtr := getQueryResultForQueryString(stub, queryString)
And my index definition is:
{"index":{"fields":["terminationReportID","terminationReportFlag","effectiveDateOfAction","importDate"]},"ddoc":"indexTerminationReportDoc", "name":"indexTerminationReportName","type":"json"}
Can anyone please help me figure out and resolve the issue? I have been stuck on this for more than 3 days. Is there anything I have to change in my index? I am re-posting the same issue because I did not get any support for it.
The issue is with the chaincode execution timeout. You can customize it in the Docker configuration file of your peers:
CORE_CHAINCODE_EXECUTETIMEOUT=80s
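For example, in a docker-compose peer definition this typically goes under the peer's environment section (the service name below is a placeholder):

	peer0.org1.example.com:
	  environment:
	    - CORE_CHAINCODE_EXECUTETIMEOUT=80s

The default is 30s, so raising it gives long-running CouchDB rich queries more room to finish.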
I followed the example given in the link below to create a new IBM Cloud VM:
https://www.ibm.com/developerworks/cloud/library/cl-softlayer-go-overview/index.html
package main

import (
	"fmt"
	"os"

	"github.com/softlayer/softlayer-go/datatypes"
	"github.com/softlayer/softlayer-go/services"
	"github.com/softlayer/softlayer-go/session"
	"github.com/softlayer/softlayer-go/sl"
)

func main() {
	sess := session.New()
	service := services.GetVirtualGuestService(sess)
	guestTpl := datatypes.Virtual_Guest{
		Hostname:                     sl.String("sample"),
		Domain:                       sl.String("example.com"),
		MaxMemory:                    sl.Int(2048),
		StartCpus:                    sl.Int(1),
		Datacenter:                   &datatypes.Location{Name: sl.String("sjc01")},
		OperatingSystemReferenceCode: sl.String("UBUNTU_LATEST"),
		LocalDiskFlag:                sl.Bool(true),
	}
	guest, err := service.Mask("id;domain").CreateObject(&guestTpl)
	if err != nil {
		fmt.Println(err)
		os.Exit(-1)
	}
	fmt.Printf("New Virtual Guest created with ID %d\n", *guest.Id)
	fmt.Printf("Domain: %s\n", *guest.Domain)
}
The IBM approval mail arrives after an hour, and the VM-related updates are generated only after that mail.
Is there a way to reduce this time? Or does IBM's process always take this long?
Help is highly appreciated.
I use Python for that task, and at one point I found that I needed 1 hour per server to create. So there is no technical solution; IBM apparently approves server creation manually.
I would recommend raising a support ticket with IBM. Now it takes me 3 minutes on average to create a virtual server.
Just in case, try to use server flavors like B1_1X2X100 to ensure you are using the standard fast setup.
Good luck
I'm using Redis in Go with the Redigo connector (https://github.com/garyburd/redigo) suggested by the Redis website.
I have:
After every Dial() I defer a Close()
Set fs.file-max = 100000
Set vm.overcommit_memory = 1
Disabled saving
Set maxclients = 100000
I run a high-traffic website, and everything runs great for about 10 minutes, after which I get:
error: dial tcp 127.0.0.1:6379: too many open files
Then I can't access redis at all from my application.
I see nothing in the Redis logs to suggest any errors or problems. What do I do to fix this?
I don't know which driver you use, but with Redigo you can define the number of open connections in a pool. All you have to do then is, for every query to Redis, first get a connection from the pool, then close it so it goes back to the pool and gets re-used, just like this:
redisPool = &redis.Pool{
	MaxIdle:   3,
	MaxActive: 10, // max number of connections
	Dial: func() (redis.Conn, error) {
		c, err := redis.Dial("tcp", ":6379")
		if err != nil {
			return nil, err // let the pool report the error instead of panicking
		}
		return c, nil
	},
}

r := redisPool.Get()         // get a connection from the pool
_, err := r.Do("GET", "one") // use the connection; command and key are separate arguments
if err != nil {
	panic(err.Error())
}
r.Close() // close the connection so it goes back to the pool
Your problem is that Redis can't open new connections and then becomes unresponsive. You need to increase your operating system's file descriptor limit (Ubuntu defaults to 1024, which can be a problem), which right now is capping the Redis maxclients setting.
The way you tweak this limit depends on the OS Redis is running on; on Linux you need something like this:
ulimit -n 99999
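Note that ulimit -n only raises the limit for the current shell session. A hedged sketch of making it persistent when Redis runs under systemd; the drop-in path below is the conventional location, adjust the unit name to match your install:

	# /etc/systemd/system/redis.service.d/limits.conf
	[Service]
	LimitNOFILE=99999

Then run systemctl daemon-reload and restart Redis. Remember the limit also has to cover your application's own connections, which is another reason to front Redis with a bounded pool as shown above.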