I followed the example in the link below to create a new IBM Cloud VM.
https://www.ibm.com/developerworks/cloud/library/cl-softlayer-go-overview/index.html
package main

import (
	"fmt"
	"os"

	"github.com/softlayer/softlayer-go/datatypes"
	"github.com/softlayer/softlayer-go/services"
	"github.com/softlayer/softlayer-go/session"
	"github.com/softlayer/softlayer-go/sl"
)

func main() {
	// Create a session from the SL_USERNAME / SL_API_KEY environment variables (or ~/.softlayer).
	sess := session.New()
	service := services.GetVirtualGuestService(sess)

	// Template describing the virtual guest to create.
	guestTpl := datatypes.Virtual_Guest{
		Hostname:                     sl.String("sample"),
		Domain:                       sl.String("example.com"),
		MaxMemory:                    sl.Int(2048),
		StartCpus:                    sl.Int(1),
		Datacenter:                   &datatypes.Location{Name: sl.String("sjc01")},
		OperatingSystemReferenceCode: sl.String("UBUNTU_LATEST"),
		LocalDiskFlag:                sl.Bool(true),
	}

	guest, err := service.Mask("id;domain").CreateObject(&guestTpl)
	if err != nil {
		fmt.Println(err)
		os.Exit(-1)
	}

	fmt.Printf("New Virtual Guest created with ID %d\n", *guest.Id)
	fmt.Printf("Domain: %s\n", *guest.Domain)
}
The IBM approval mail arrives after about an hour, and the VM-related updates only start after that mail.
Is there a way to reduce this time, or does IBM's process simply take that long?
Help is highly appreciated.
I use Python for that task, and at one point I found it was taking an hour to create a single server, so there is no technical workaround on the client side; IBM apparently approves server creation manually.
I would recommend raising a support ticket with IBM. After doing so, it now takes me about 3 minutes on average to create a virtual server.
Just in case, try using standard flavors like B1_1X2X100 to ensure you get the standard fast setup (see the sketch below).
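For example, with softlayer-go the flavor can be requested in the create template instead of specifying CPU and memory individually. The snippet below is only a sketch based on my reading of the datatypes package; the SupplementalCreateObjectOptions / FlavorKeyName fields are my assumption of how the flavor is passed, so verify them against the current library:
// Sketch only: order a flavor-based virtual guest (field names assumed, please verify).
guestTpl := datatypes.Virtual_Guest{
	Hostname:                     sl.String("sample"),
	Domain:                       sl.String("example.com"),
	Datacenter:                   &datatypes.Location{Name: sl.String("sjc01")},
	OperatingSystemReferenceCode: sl.String("UBUNTU_LATEST"),
	// Standard preset instead of MaxMemory/StartCpus:
	SupplementalCreateObjectOptions: &datatypes.Virtual_Guest_SupplementalCreateObjectOptions{
		FlavorKeyName: sl.String("B1_1X2X100"),
	},
}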
Good luck
I implemented two different solutions to discover services on my BLE device. One uses a handler and then returns what .discoverServices() found; the other is very similar but reports the size of the discovered services list, which is always 0. I tried it with my Realme Buds 2 as a test device and some other publicly visible devices. The result is always 0. What could the problem be?
Handler(Looper.getMainLooper()).post {
    val temp = bluetoothGatt?.discoverServices()
    addGlog("discoverServices() returned $temp")
}
addGlog("handler discover service reached an end")
val gattServices: List<BluetoothGattService> = gatt.getServices()
addGlog("Services count: " + gattServices.size)
for (gattService in gattServices) {
    val serviceUUID = gattService.uuid.toString()
    addGlog("Service uuid $serviceUUID")
}
Edit: addGlog is a simple log function that prints results.
Answer: The code is not wrong, but it takes some time to discover those services, so I put this code behind a button. This way there are 3-4 seconds between connecting to the device and performing the service discovery: one button performs the connection operations and another one performs the service discovery. I am sorry if my answer is pretty lame, but I am still a noob on this topic.
Background: I want to reduce the response time when working with SQL databases in my Go applications.
Go's standard library provides the database/sql package with a built-in connection pool.
It includes the following configuration options:
func (db *DB) SetConnMaxIdleTime(d time.Duration)
func (db *DB) SetConnMaxLifetime(d time.Duration)
func (db *DB) SetMaxIdleConns(n int)
func (db *DB) SetMaxOpenConns(n int)
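For reference, these are typically applied right after opening the pool; a minimal sketch (driver name and DSN are placeholders):
db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable") // placeholder DSN
if err != nil {
	log.Fatal(err)
}
db.SetMaxOpenConns(50)                  // hard cap on concurrent connections
db.SetMaxIdleConns(10)                  // how many idle connections may be kept
db.SetConnMaxIdleTime(5 * time.Minute)  // close connections idle longer than this
db.SetConnMaxLifetime(30 * time.Minute) // recycle connections after this age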
However, there is nothing like SetMinIdleConns to ensure that there are always some open connections that can be used when a request comes in.
As a result, my application has good response times under load, but the first request after some idle time always has some delay because it needs to open a new connection to the database.
Question: Is there a way to fix this with the Go standard library or is there some alternative connection pooling library for Go with this feature?
Workarounds already tried:
I tried setting ConnMaxIdleTime and ConnMaxLifetime to very high values, but then the SQL server closes the idle connections itself, and I get even higher delays or even errors on the first call after a long idle time.
Obviously, I could create a background task that periodically uses the database (see the sketch after this list). However, this does not seem like a clean solution.
I am considering porting a connection pool library from another language to Go.
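For completeness, here is roughly what that background keep-alive task would look like; a minimal sketch assuming an already opened *sql.DB named db (the interval is an arbitrary choice):
go func() {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		// Ping checks out a connection (opening one if necessary) and returns it,
		// so the pool is warm before the next real request arrives.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		if err := db.PingContext(ctx); err != nil {
			log.Printf("keep-alive ping failed: %v", err)
		}
		cancel()
	}
}()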
The pgx/pgxpool library provides a connection pool for Postgres, which lets you configure the minimum number of connections:
connConf, err := pgxpool.ParseConfig(connStr)
// ... (error handling)
connConf.MaxConns = 50
// Set minimum number of connections:
connConf.MinConns = 10
pool, err := pgxpool.ConnectConfig(context.TODO(), connConf)
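A minimal usage sketch with that pool (the users table and query are made up for illustration):
var name string
err = pool.QueryRow(context.TODO(), "SELECT name FROM users WHERE id = $1", 1).Scan(&name)
if err != nil {
	// handle error
}
fmt.Println(name)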
I'm using the following code to load data into Aerospike. data is a slice of maps of type BinMap.
for _, binMap := range data {
	id, ok := binMap["id"].(string)
	if !ok {
		continue // skip records without a string "id" bin
	}
	key, err := as.NewKey("test", "myset", id)
	if err != nil {
		fmt.Println(err)
		continue
	}
	if err := shared.Client.Put(nil, key, binMap); err != nil {
		fmt.Println(err)
	}
}
After loading a few records, the following error message is received:
command execution timed out on client: Exceeded number of retries.
See `Policy.MaxRetries`. (last error: Node not found for partition
test:711 in partition table.)
On each iteration, the partition number in the error (test:<n>) changes.
The error continues even after waiting for 5 seconds after each Put command. I am not sure which timeout is being reported in the error message. What client configuration is required for the Go client?
Using macOS 10.15.3; Go client; Aerospike running on Docker 2.2.0.3.
There's a good chance your cluster hasn't formed correctly, or that its networking isn't properly set up to give clients access to all the nodes. Since you're using Docker, take a look at Lucien's Medium post How do I get a 2 nodes Aerospike cluster running quickly in Docker without editing a single file?.
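On the client-configuration part of the question: once the cluster itself is reachable from the client, you can also give commands a larger budget. The snippet below is only a sketch assuming a recent aerospike-client-go version and a single seed node on localhost; policy field names can differ slightly between client versions, so double-check them:
// Sketch only: explicit seed host plus a more forgiving write policy (verify field names).
client, err := as.NewClient("127.0.0.1", 3000)
if err != nil {
	log.Fatal(err)
}

writePolicy := as.NewWritePolicy(0, 0)     // generation 0, default TTL
writePolicy.TotalTimeout = 2 * time.Second // overall per-command budget (name may vary by version)
writePolicy.MaxRetries = 3                 // the limit mentioned in the error message

// key and binMap as in the loading loop above.
err = client.Put(writePolicy, key, binMap)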
In MT4 there is a stage/state after switching from AccountA to AccountB: the connection is established and init() and start() are triggered by MT4, but the "blinnnggg" sound, which signals that all the historical/outstanding trades have been loaded from the server, has not played yet.
Switch Account > Establish Connection > Trigger init()/start() events > Start downloading outstanding/historical trades > Complete the download (the "bliinng" sound plays).
I need to know (in MQL4) when all the trades have finished downloading from the trade server, i.e. whether the account is truly empty versus still downloading history from the trade server.
Any pointer will be appreciated. I've explored IsTradeAllowed(), IsContextBusy() and IsConnected(). All of these report a "normal" state, and the init() and start() events fire fine. But I cannot figure out whether the history/outstanding trade lists have finished downloading.
UPDATE: The workaround I finally implemented was to use OrdersHistoryTotal(). Apparently this number is ZERO (0) while the order history is downloading, and it will never be zero afterwards (due to the initial deposit). So I ended up using this as a "flag".
Observation
As the problem was posted, there seems to be no such "integrated" method in the MT4-Terminal.
IsTradeAllowed() reflects an administrative state of the account/access to the execution of the Trading Services { IsTradeAllowed | !IsTradeAllowed }
IsConnected() reflects a technical state of the visibility / login credentials / connection used upon an attempt to setup/maintain an online connection between a localhost <-> Server { IsConnected() | !IsConnected() }
init() {...} is a one-stop setup facility that is called once an MT4-programme { ExpertAdvisor | Script | TechnicalIndicator } is launched on a localhost machine. This facility is strongly advised to be non-blocking and non-re-entrant. A change from user account_A to another user account_B is typically ( via MT4-configuration options ) a reason to stop the execution of a previously loaded MQL4-code ( be it an EA / a Script / a Technical Indicator ).
start() {...} is an event-handler facility, that endlessly waits, for a next occurrence of an FX-Market Event appearance ( being propagated down the line by the Broker MT4-Server automation ) that is being announced via an established connection downwards, to the MT4-Terminal process, being run on a localhost machine.
A Workaround Solution
As understood, the problem may be detected and handled indirectly.
While the MT4 platform seems to have no direct method to distinguish between a complete / incomplete refresh of the list of { current | historical } trades, let me propose a method of indirect detection thereof.
Try to launch a "signal"-trade ( a pending order, placed geometrically well far away, in the PriceDOMAIN, from the current Ask/Bid-levels ).
Once this trade would be end-to-end registered ( Server-side acknowledged ), the local-side would have confirmed the valid state of the db.POOL
Making this a request/response pattern between localhost/MT4-Server processes, the localhost int init(){...} / int start(){...} functionality may thus reflect a moment, when the both sides have synchronised state of the records in db.POOL
I'm using Redis in Go with the Redigo connector (https://github.com/garyburd/redigo) suggested by the Redis website.
I have:
After every Dial() I defer a Close()
Set fs.file-max = 100000
Set vm.overcommit_memory = 1
Disabled saving
Set maxclients = 100000
I run a high-traffic website, and everything runs great for about 10 minutes, after which I get:
error: dial tcp 127.0.0.1:6379: too many open files
Then I can't access redis at all from my application.
I see nothing in the redis logs to suggest any errors or problems. What do I do to fix this?
I don't know which driver you use, but with Redigo you can define the number of open connections in a pool. All you have to do then is, on every query to Redis, first get a connection from the pool and close it when you're done, so it goes back to the pool and gets reused, just like this:
redisPool = &redis.Pool{
	MaxIdle:   3,
	MaxActive: 10, // max number of connections
	Dial: func() (redis.Conn, error) {
		c, err := redis.Dial("tcp", ":6379")
		if err != nil {
			return nil, err // return the error instead of panicking so the pool can report it
		}
		return c, nil
	},
}
r := redisPool.Get()         // get a connection from the pool
_, err := r.Do("GET", "one") // use the connection (command and key are separate arguments)
if err != nil {
	panic(err.Error())
}
r.Close() // close the connection so it goes back to the pool and gets reused
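If the Redis calls happen inside HTTP handlers (a high-traffic website), the same pattern with defer keeps every request to one pooled connection; a minimal sketch (route and key are placeholders):
http.HandleFunc("/hits", func(w http.ResponseWriter, req *http.Request) {
	r := redisPool.Get()
	defer r.Close() // always return the connection to the pool

	n, err := redis.Int(r.Do("INCR", "hits")) // placeholder key
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintf(w, "hits: %d\n", n)
})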
Your problem is that Redis can't open new connections and then becomes unresponsive. You need to increase the file descriptor limit of your operating system (Ubuntu defaults to 1024, which can be a problem), because right now it is capping the Redis maxclients setting.
The way you tweak this limit depends on the OS Redis is running on; on Linux you need something like this:
ulimit -n 99999