On Google Cloud, what is the API method to stop an instance (not the gcutil or manual method)?

Mysteriously, there appears to be no documented API call to stop a Google Cloud instance. In these docs:
https://developers.google.com/compute/docs/instances#stop_job
both the prior and following commands describe API calls for accomplishing other tasks, but not the very common task of shutting down an instance.
When I hacked the URL for getting GCE help on 'resetting' an instance, assuming the "delete" command probably existed, I got this valid page:
https://developers.google.com/compute/docs/reference/latest/instances/delete
but this talks about deleting "instance resources" rather than instances themselves. Confusing (to me).
So, is there, or is there not, an API call to shut down a Google Cloud VM instance?

Would this be what you are looking for?
https://developers.google.com/compute/docs/api/python-guide#stoppinganinstance
Python API to stop Google Compute Engine Instance.
This would 'stop' the instance, as deleting an instance is the recommended way to stop one.
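For reference, a minimal sketch of that call using the google-api-python-client library; the project, zone, and instance names below are placeholders:

    from googleapiclient import discovery

    compute = discovery.build('compute', 'v1')  # uses Application Default Credentials

    project = 'my-project'      # placeholder
    zone = 'us-central1-a'      # placeholder
    instance = 'my-instance'    # placeholder

    # Deleting the instance, as recommended in the answer above.
    operation = compute.instances().delete(
        project=project, zone=zone, instance=instance).execute()
    print(operation['name'])

    # Newer versions of the Compute Engine API also expose a stop method:
    # compute.instances().stop(project=project, zone=zone, instance=instance).execute()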

Related

AWS EC2 Instance Profile for S3 permissions inconsistent

Background: I'm writing an automated deployment script to deploy a ruby on rails application to AWS on an EC2 instance using S3 as the storage for ActiveStorage. My script creates an instance profile/role and attaches it to the EC2 instance on creation. My script uses the ruby sdk for AWS.
Sometimes when my script runs, it works great (which tells me my configuration is correct). Sometimes it throws the following exception:
/home/ubuntu/.rbenv/versions/2.6.5/lib/ruby/gems/2.6.0/gems/aws-sigv4-1.2.1/lib/aws-sigv4/signer.rb:613:in `extract_credentials_provider': missing credentials, provide credentials with one of the following options: (Aws::Sigv4::Errors::MissingCredentialsError)
- :access_key_id and :secret_access_key
- :credentials
- :credentials_provider
I generally have success about 9 times out of 10 using a t3a.micro or t3.micro instance. I usually have a failure 9 times out of 10 using a t3a.nano or t3.nano instance.
It sure seems like there is something eventually consistent about these instance profiles, but I can't find anything in the documentation. What's going on, and what can I do to make this succeed consistently?
Thank you.
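There is no accepted answer shown here, but if the cause really is eventual consistency of the freshly created instance profile, one common mitigation is to retry credential resolution with a short delay instead of failing on the first attempt. A rough sketch of that idea in Python with boto3 (the question itself uses the Ruby SDK; the helper below is hypothetical):

    import time
    import boto3

    def wait_for_instance_credentials(max_attempts=10, delay=5):
        """Poll until the instance profile credentials become available.

        Credentials from a role attached at launch can take a little while to
        propagate to the instance metadata service, so retry rather than
        failing on the first lookup.
        """
        for _ in range(max_attempts):
            credentials = boto3.Session().get_credentials()
            if credentials is not None:
                return credentials
            time.sleep(delay)
        raise RuntimeError('Instance profile credentials never became available')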

"The search engine appears to be down or failing to respond to the search query"

I've installed FusionAuth (awesome product) into a Docker Swarm cluster using the official docker-compose.yml file and everything seems to work brilliantly.
EXCEPT
Periodically, when a user goes to login they will be presented with the above error stating that the search engine is not available. If they try again immediately then everything works correctly! I would, obviously, prefer that they never saw the error.
Elasticsearch is definitely running and is responding to API calls correctly, and I can see the fusionauth_user index is present and populated with docs.
I guess my question is twofold:
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
I've searched the docs for answers to the above but I can't seem to find anything :-(
Thanks for the kind feedback.
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
Elasticsearch provides full text search of user data. Each time a user is created or updated, the user is re-indexed. In this case, during login, we are updating the search index with the last login instant.
This service is required and cannot be disabled. We have had clients request that we make this service optional for embedded applications or small-scale scenarios where Elasticsearch may not be required. While this is not currently planned, it is possible we may revisit this option in the future.
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
Not currently.
Full disclosure, I am not a Docker or Docker Swarm expert at all - perhaps there are some nuances to Swarm and response time due to spin-up and spin-down of resources?
Do you see any exceptions in the log when a user sees this error on the login?
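If it turns out to be an intermittent connectivity problem between the FusionAuth node and the search service, one way to narrow it down is to poll Elasticsearch from inside the Swarm and watch for dropped requests. A rough sketch in Python (the service address below is an assumption; point it at wherever your Elasticsearch runs):

    import time
    import requests

    ES_URL = 'http://search:9200'  # placeholder: your Elasticsearch service address

    for _ in range(20):
        try:
            # Cluster health plus the document count of the index FusionAuth uses.
            health = requests.get(ES_URL + '/_cluster/health', timeout=2).json()
            count = requests.get(ES_URL + '/fusionauth_user/_count', timeout=2).json()
            print(health['status'], count['count'])
        except requests.RequestException as exc:
            print('search engine unreachable:', exc)
        time.sleep(5)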

Debugging Parse Cloud-Code

What would be the best way to debug Parse Cloud Code? Currently it's a mess of logging to the console and checking logs. Does anyone have a good workable solution?
During development, you should begin by testing against a locally hosted server. For example, I use VS Code, which lets you set breakpoints and watch variables for their values. You can set up a tool like ngrok to get a remote URL for your local endpoint so you can test with non-locally-hosted clients if you'd like.
We also use Slack extensively. We've created our own Slack bot, and it has several channels it reports relevant information to, triggered from our parse-server. One of these is a dev error channel. Instead of console.logs, which are hard to sift through to find what you're looking for, we push important information to Slack. We don't switch every single console.log to a Slack message, just the important "Hey, something went wrong, here's the information" messages. This brings them to our attention so we can identify and resolve them much faster. Slack is awesome. I recommend using Slack, even on a solo project.
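The same idea sketched in Python (our actual bot runs from the parse-server in Node; the webhook URL and helper name below are placeholders):

    import json
    import requests

    SLACK_WEBHOOK_URL = 'https://hooks.slack.com/services/XXX/YYY/ZZZ'  # placeholder

    def report_error(message, context=None):
        """Post an important error to the dev error channel via an incoming webhook."""
        text = ':rotating_light: ' + message
        if context is not None:
            text += '\n' + json.dumps(context, indent=2)
        requests.post(SLACK_WEBHOOK_URL, json={'text': text}, timeout=5)

    # Example (hypothetical): report_error('Payment webhook failed', {'order_id': 1234})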
at the moment you can access your Logs using a console.log() or console.error() for functions and all general logs of everything that happens with your app, at Back4App you can access using: Server Settings -> Logs -> Settings -> Server System Log.
Or functions and all logs generated by Parse server, they're: request.log.info() and request.log.error(), at Back4App you can access using: Dashboard -> Logs.

Google Cloud Dataflow error: "The Application Default Credentials are not available"

We have a Google Cloud Dataflow job, which writes to Bigtable (via HBase API). Unfortunately, it fails due to:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
    at com.google.bigtable.repackaged.com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:74)
    at com.google.bigtable.repackaged.com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:54)
    at com.google.bigtable.repackaged.com.google.cloud.config.CredentialFactory.getApplicationDefaultCredential(CredentialFactory.java:181)
    at com.google.bigtable.repackaged.com.google.cloud.config.CredentialFactory.getCredentials(CredentialFactory.java:100)
    at com.google.bigtable.repackaged.com.google.cloud.grpc.io.CredentialInterceptorCache.getCredentialsInterceptor(CredentialInterceptorCache.java:85)
    at com.google.bigtable.repackaged.com.google.cloud.grpc.BigtableSession.<init>(BigtableSession.java:257)
    at org.apache.hadoop.hbase.client.AbstractBigtableConnection.<init>(AbstractBigtableConnection.java:123)
    at org.apache.hadoop.hbase.client.AbstractBigtableConnection.<init>(AbstractBigtableConnection.java:91)
    at com.google.cloud.bigtable.hbase1_0.BigtableConnection.<init>(BigtableConnection.java:33)
    at com.google.cloud.bigtable.dataflow.CloudBigtableConnectionPool$1.<init>(CloudBigtableConnectionPool.java:72)
    at com.google.cloud.bigtable.dataflow.CloudBigtableConnectionPool.createConnection(CloudBigtableConnectionPool.java:72)
    at com.google.cloud.bigtable.dataflow.CloudBigtableConnectionPool.getConnection(CloudBigtableConnectionPool.java:64)
    at com.google.cloud.bigtable.dataflow.CloudBigtableConnectionPool.getConnection(CloudBigtableConnectionPool.java:57)
    at com.google.cloud.bigtable.dataflow.AbstractCloudBigtableTableDoFn.getConnection(AbstractCloudBigtableTableDoFn.java:96)
    at com.google.cloud.bigtable.dataflow.CloudBigtableIO$CloudBigtableSingleTableBufferedWriteFn.getBufferedMutator(CloudBigtableIO.java:836)
    at com.google.cloud.bigtable.dataflow.CloudBigtableIO$CloudBigtableSingleTableBufferedWriteFn.processElement(CloudBigtableIO.java:861)
This makes very little sense, because the job is already running on the Cloud Dataflow service/VMs.
The Cloud Dataflow job id: 2016-05-13_11_11_57-8485496303848899541
We are using bigtable-hbase-dataflow version 0.3.0, and we want to use HBase API.
I believe this is a known issue where GCE instances are very rarely unable to get the default credentials during startup.
We have been working on a fix, which should be part of the next release (1.6.0), coming soon. In the meantime we'd suggest re-submitting the job, which should work. If you run into problems consistently, or want to discuss other workarounds (such as backporting the 1.6.0 fix), please reach out to us.
1.7.0 has been released, so this should be fixed now: https://cloud.google.com/dataflow/release-notes/release-notes-java

SCOPES_WARNING in BigQuery when accessed from a Cloud Compute instance

Every time I use bq on a Cloud Compute instance, I get this:
/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/oauth2client/contrib/gce.py:73: UserWarning: You have requested explicit scopes to be used with a GCE service account.
Using this argument will have no effect on the actual scopes for tokens
requested. These scopes are set at VM instance creation time and
can't be overridden in the request.
warnings.warn(_SCOPES_WARNING)
This is a default f1-micro instance running Debian 8. I gave this instance access to all Cloud APIs, and its service account is also an owner of the project. I ran gcloud init, but the warning persists.
Is there something wrong?
I noticed that this warning did not appear on an older instance running SDK version 0.9.85; however, I now get it when creating a new instance or upgrading to the latest gcloud SDK.
The scopes warning can be safely ignored, as it's just telling you that the only scopes that will be used are the ones specified at instance creation time, which is the expected behavior of the default GCE service account.
It seems the 'bq' tool doesn't distinguish between the default service account on GCE and a regular service account and always tries to set the scopes explicitly. The warning comes from oauth2client, and it looks like it didn't display this warning in versions prior to v2.0.0.
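Since the scopes are fixed when the VM is created, you can double-check what your instance's default service account actually has by asking the metadata server from the VM itself; a small Python sketch:

    import requests

    # The GCE metadata server lists the scopes granted to the default service
    # account at instance creation time.
    resp = requests.get(
        'http://metadata.google.internal/computeMetadata/v1/'
        'instance/service-accounts/default/scopes',
        headers={'Metadata-Flavor': 'Google'},
        timeout=2,
    )
    print(resp.text)  # one scope URL per line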
I've created a public issue to track this, which you can star to get updates:
https://code.google.com/p/google-bigquery/issues/detail?id=557