Is it necessary to create users and database instances sequentially while using Infrastructure as Code?

I was looking at this Deployment Manager template in the Cloud Foundation Toolkit on GitHub, and I cannot work out what this line is for.
GitHub Repo

That is a reference to another function within the template (line 184). The reason this is done sequentially is that the API call to add a user to the DB only handles a single user at a time; adding users to Cloud SQL through the API cannot be done in a batch. Since Deployment Manager creates all of these resources through API calls, the sequential calls are required.
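To make that concrete, here is a minimal sketch (not the template's actual code) of what those sequential calls look like from Python, assuming the google-api-python-client library and Application Default Credentials; the project, instance, and user values are placeholders:

```python
# Sketch: Cloud SQL users are created one API call at a time, because
# users.insert accepts exactly one user per request. Project, instance,
# and user values are placeholders.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

users = [
    {"name": "app_user", "password": "changeme1"},
    {"name": "report_user", "password": "changeme2"},
]

for user in users:
    # Each insert is a separate, sequential API call; there is no
    # batch variant for adding Cloud SQL users.
    op = service.users().insert(
        project="my-project", instance="my-sql-instance", body=user
    ).execute()
    print(op["name"], op["status"])
```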

How to start a VM instance using Cloud Scheduler

Background and Goal
I have a Debian/Linux VM on GCP which I manually start every morning; after it finishes its run, it shuts itself down using a Linux command. I want to automate the start of the VM using Cloud Scheduler. The question asked in GCP auto shutdown and startup using Google Cloud Schedulers has several answers, and I am interested in pursuing the answer (https://stackoverflow.com/a/65062924/10322004) proposed by @nikelone, because it seems simple and has been endorsed by @Damien and @RayFoss as being easy. I am a neophyte in these matters and could not comprehend their replies fully, so I created this post to elicit clearer answers for a person like me.
What I have tried
I went to https://cloud.google.com/compute/docs/reference/rest/v1/instances/start (call this page A), tried the API, and was able to successfully start my already stopped VM when I clicked on the Execute button. I presume this means that my entries were fine and can be used in conjunction with appropriate software, such as Cloud Scheduler, to perform the start on a predefined schedule (see the sketch below). But the problem is that I do not know how to proceed from here. My questions are given below.
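For reference, here is a sketch of what that same call looks like outside the API Explorer, using the google-api-python-client library; the project, zone, and instance names are placeholders:

```python
# Sketch of the same instances.start call the API Explorer makes,
# via google-api-python-client with Application Default Credentials.
# Project, zone, and instance names are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
operation = (
    compute.instances()
    .start(project="my-project", zone="us-central1-a", instance="my-vm")
    .execute()
)
print(operation["status"])  # start returns a zone operation to poll
```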
My Questions
On page A, the last three sections are titled Authorization Scopes, IAM Permissions, and Examples, and none of them says anything specific about what the user should do. Is it correct to assume that they have nothing to do with Cloud Scheduler and relate instead to other methods of achieving the same goal? If not, what should I be doing to follow the statements in these three sections?
Assuming that the answer to question 1 is "yes", meaning I can now start scheduling with Cloud Scheduler, I next looked at the quickstart for Cloud Scheduler at https://cloud.google.com/scheduler/docs/quickstart (call this page B). The list of tasks is quite long: installing the Cloud SDK, running quite a few commands on the console, enabling some features, setting up Pub/Sub, creating a job, running the job, and verifying the results in Pub/Sub. This looks like a daunting set of tasks, and I could not understand why it is necessary to jump through so many hoops for something that was achieved with just a few keystrokes earlier. So are these steps all necessary? Or is there a way to use Cloud Scheduler directly without going through so many intermediate steps?
Now assume that the answer to question 2 is that I have to perform all the steps on page B. If I run into a problem while carrying them out, my VM may get messed up irretrievably. Is there a way in which the Cloud Platform or its components can be used to reset my VM to its current state as of today, which is working fine? I really do not want to end up with something worse than what I have now.
To answer your questions:
1. Auth scopes and IAM permissions are required for you to call Compute Engine API methods such as instances.start and instances.stop. You need to set the right scope and the right IAM permission on your job or it will fail. They are indeed related to the methods you're interested in calling, so you must keep them in mind. The examples show ways to call the API from different programming languages, so you don't need to pay attention to them if you create the job through the Cloud Console. To further address this part, see the full steps I included below.
2. The answer that you're trying to follow uses an HTTP target, while the quickstart you've linked uses Pub/Sub; they are different because they serve separate use cases. This link shows proper instructions for creating a scheduler job with an HTTP target. You can create this kind of job straight from the Cloud Console or with a one-liner gcloud command. If your config is incorrect, the trigger will not execute the endpoint URL and you will see an error that you must fix.
3. Addressed in answer #2.
Basically, you just need to follow the instructions in the link you've sent. However, I'll post them here as well, along with my explanation (and a programmatic sketch after the steps):
1. Go to https://cloud.google.com/scheduler, click on Go to Console, then click on Create Job. Fill in the required fields (those with red asterisks) when creating the Scheduler job.
2. Select HTTP as the target type.
3. Enter this as your URL (modify the capitalized words):
https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_ZONE/instances/INSTANCE_NAME/start
4. Choose POST as the HTTP method.
5. Click Show more and choose "Add OAuth token" as the Auth header.
6. Enter your service account. This is used to pass an OAuth token when your scheduler job calls the Compute API. Make sure that the service account you use has the "Compute Instance Admin" role, because this role contains the permissions to start and stop your instance. See this instruction on how to grant access to a service account. If you're not sure which service account to use, feel free to use the Compute Engine default service account.
7. Add this as the scope:
https://www.googleapis.com/auth/cloud-platform
The description of this scope is: "See, edit, configure, and delete your Google Cloud Platform data."
8. Repeat the steps for the Stop instance job, changing the URL in step 3 to end in /stop instead of /start.
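For completeness, if you'd rather create the job programmatically than through the Cloud Console, here is a minimal sketch using the google-cloud-scheduler client library. The project, region, zone, instance name, schedule, and service account are all placeholder assumptions; substitute your own values.

```python
# Sketch: create the HTTP-target job from steps 1-7 with the
# google-cloud-scheduler client library. Every name below (project,
# region, zone, instance, service account) is a placeholder.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path("my-project", "us-central1")

job = scheduler_v1.Job(
    name=f"{parent}/jobs/start-my-vm",
    schedule="0 7 * * *",  # every day at 07:00
    time_zone="Etc/UTC",
    http_target=scheduler_v1.HttpTarget(
        uri=(
            "https://compute.googleapis.com/compute/v1/projects/"
            "my-project/zones/us-central1-a/instances/my-vm/start"
        ),
        http_method=scheduler_v1.HttpMethod.POST,
        # An OAuth token for this service account is minted per run,
        # which is what step 5's "Add OAuth token" does in the Console.
        oauth_token=scheduler_v1.OAuthToken(
            service_account_email="scheduler-sa@my-project.iam.gserviceaccount.com",
            scope="https://www.googleapis.com/auth/cloud-platform",
        ),
    ),
)

created = client.create_job(parent=parent, job=job)
print("Created job:", created.name)
```

The equivalent gcloud one-liner is gcloud scheduler jobs create http with the matching --uri, --http-method, --oauth-service-account-email, and --oauth-token-scope flags.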

Is there an API for purging a project in the openstack?

I need to easily purge the resources of a project on OpenStack through an API call, just like this CLI command:
neutron purge PROJECT_ID
which is available in the Neutron project docs, but as an API call instead. I couldn't find such an API, so my questions are:
1. Isn't there such an API?
2. If there is not, why? Is there a specific reason for it?
I checked the source code of the clients and of neutron-server, but unfortunately there is no dedicated endpoint in the REST API for this functionality.
This feature is only supported by the neutron client, not by the openstack client. When you run neutron purge PROJECT_ID, all the neutron client does inside its Python code is list all resources related to the given project, then iterate over that list and send a delete to the Neutron REST API for each individual resource (see the sketch below the link). So it is only a simple automation in the client's Python code, not a dedicated endpoint on the server side.
See the specific function inside the code here: https://github.com/openstack/python-neutronclient/blob/master/neutronclient/neutron/v2_0/purge.py#L63
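For illustration, a stripped-down version of that client-side loop might look like the following sketch. It assumes python-neutronclient and keystoneauth1 are installed, the auth values are placeholders, and the real purge code handles more resource types and their deletion order.

```python
# Sketch of what `neutron purge PROJECT_ID` does client-side: list a
# project's resources, then delete them one REST call at a time.
# Auth values are placeholders; the real purge covers more resource
# types and handles deletion ordering and error reporting.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from neutronclient.v2_0 import client as neutron_client

auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",  # placeholder
    username="admin",
    password="secret",
    project_name="admin",
    user_domain_id="default",
    project_domain_id="default",
)
neutron = neutron_client.Client(session=session.Session(auth=auth))

project_id = "PROJECT_ID"  # the project whose resources get purged

# Floating IPs and ports have to go before the networks they depend on.
for fip in neutron.list_floatingips(tenant_id=project_id)["floatingips"]:
    neutron.delete_floatingip(fip["id"])

for port in neutron.list_ports(tenant_id=project_id)["ports"]:
    neutron.delete_port(port["id"])

for net in neutron.list_networks(tenant_id=project_id)["networks"]:
    neutron.delete_network(net["id"])
```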
Based on my experience with OpenStack and its community, I think it was done this way because it was easier to add new code only to the neutron client. If this were to become a new endpoint, the feature would have to be implemented in neutron, the openstack client, and the openstacksdk as well. Each repository has its own team, and this purge feature is so small that it was not worth persuading all four teams. The more components you try to update for one simple feature, the harder it becomes, because whoever wants to bring the feature upstream is responsible for bringing the teams of all required components together, and if even one member of the core teams has a problem with your implementation, you have to start nearly from the beginning. So it can easily take a year or two to bring a cross-component feature like a new endpoint upstream when you are not part of a core team yourself. Bringing the feature only into the neutron client is quite easy compared to a cross-project contribution.
This is at least the reason why I, too, would implement this feature only in the neutron client (or only in the openstack client, if possible) instead of adding a new endpoint, if I were bringing this feature upstream.

Wufoo: update entry using API

Using Wufoo's API, is it possible (and if so, how) to retrieve a single entry and update the information in it, without submitting it as another entry? I can't seem to find any information on the Wufoo API website. If this isn't possible, any suggestions on how to work around it (such as using a local DB)? I'd like to build a hybrid app that authenticates locally and uses Wufoo for the data collection.
Thanks :)
I've been working on a similar kind of project and ran into the same issue. After submitting a help ticket, I was informed that the Wufoo API does NOT support this function (update). Any updates would have to be done externally.
Like you, I would rather store my data in Wufoo than externally, so I'm working on a few scripts that will serve as webhook endpoints for my Wufoo forms. Whenever a form is submitted, one of these scripts receives that data, does stuff with it, then uses the API to relay the modified data to a second Wufoo form that "shadows" the original (same fields if needed, or new fields that reflect the processing I did).
This second set of forms is the final destination for the data and is only accessed by my code; the first set of forms is only accessed by live users. In a nutshell, it's a big feedback loop that uses webhooks to trigger the processing.
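To give an idea, here is a rough sketch of one of those webhook endpoints in Python (Flask plus requests). The subdomain, form hash, API key, field IDs, and the "processing" itself are placeholders for whatever your own forms and logic need.

```python
# Sketch of a webhook relay: receive a Wufoo form submission, process
# it, and POST the result to a shadow form via the Entries API.
# Subdomain, form hash, API key, and field IDs are placeholders.
import requests
from flask import Flask, request

app = Flask(__name__)

WUFOO_SUBDOMAIN = "myaccount"      # placeholder
SHADOW_FORM_HASH = "z1x2c3v4"      # placeholder
API_KEY = "XXXX-XXXX-XXXX-XXXX"    # placeholder

@app.route("/wufoo-hook", methods=["POST"])
def wufoo_hook():
    # Wufoo webhooks POST the entry as form fields (Field1, Field2, ...).
    original = request.form
    # Example "processing" step; replace with whatever you need.
    payload = {
        "Field1": original.get("Field1", "").upper(),
        "Field2": original.get("Field2", ""),
    }
    url = (
        f"https://{WUFOO_SUBDOMAIN}.wufoo.com/api/v3/forms/"
        f"{SHADOW_FORM_HASH}/entries.json"
    )
    # Wufoo uses HTTP basic auth: the API key as the user name and
    # any string as the password.
    resp = requests.post(url, data=payload, auth=(API_KEY, "footastic"))
    return ("", 204) if resp.ok else ("relay failed", 502)

if __name__ == "__main__":
    app.run(port=8080)
```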
Hope this helps.
For more info on webhooks, see http://help.wufoo.com/articles/en_US/SurveyMonkeyArticleType/Webhooks?q=webhook&fs=Search&pn=1
For more info on the Wufoo Entries API (GET and POST), see http://help.wufoo.com/articles/en_US/SurveyMonkeyArticleType/The-Entries-API

Outer API request data storage in Rails 3

I have a Rails 3 app with an API to a central app that provides some data.
I've dropped an API implementation into the lib/ folder and found that I can't access the session method directly. So the question is: how can I access the session from a library?
I think that accessing the session from lib or a model is not a good idea. Session information should only be used directly from controllers.
If you need session information in a model or within a library, it is better to read the session in the controller and pass the values in as parameters.
There are several reasons behind this (mainly object-oriented design), but a hint that exposes the problem is that testing the object would require creating a session object, and that is not good practice in TDD.

How to add custom fields to Salesforce leads through the REST API?

Hi
I am currently working on an application which implements the salesforce.com REST API. I have done all the authentication and received all the info needed. Now I want my application to push leads into the customer's account. But lead fields can be customized, and different customers use different custom fields, so can anyone suggest how to add those custom fields to my form, which will be pushed to the customer's Salesforce account?
Thanks
You can use the describe resource in the REST API to obtain the metadata about the Lead object, including all of its fields:
https://{someinstance}.salesforce.com/services/data/v20.0/sobjects/lead/describe
You can use the list of fields to drive your form, and to control what you subsequently POST to /services/data/v20.0/sobjects/lead to create the new lead.
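As a sketch of that round trip, assuming Python's requests library and placeholder values for the instance, access token, and custom field name (custom field API names always end in __c):

```python
# Sketch: describe the Lead object to discover its fields (standard and
# custom), then POST a new lead that fills in a custom field. Instance
# URL, token, and Favorite_Color__c are placeholders.
import requests

INSTANCE = "https://na1.salesforce.com"  # placeholder instance
TOKEN = "ACCESS_TOKEN"                   # placeholder OAuth token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Fetch the Lead metadata; each field reports its name and type,
#    and whether it is a custom field.
describe = requests.get(
    f"{INSTANCE}/services/data/v20.0/sobjects/Lead/describe",
    headers=HEADERS,
).json()
for field in describe["fields"]:
    print(field["name"], field["type"], field["custom"])

# 2. Create the lead, including the customer's custom fields.
lead = {
    "LastName": "Doe",
    "Company": "Acme",
    "Favorite_Color__c": "blue",  # hypothetical custom field
}
resp = requests.post(
    f"{INSTANCE}/services/data/v20.0/sobjects/Lead",
    headers=HEADERS,
    json=lead,
)
print(resp.status_code, resp.json())
```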
Here's the link to the REST API pilot docs in case you haven't seen them.
No idea if the REST API supports describe() calls; they're a way of querying all metadata about a table (columns, their types, etc.). In the normal web services API this info can be found at http://www.salesforce.com/us/developer/docs/api/index_Left.htm#StartTopic=Content/sforce_api_calls_describesobject.htm#topic-title
The Metadata REST API was put on hold by Salesforce, so there is no REST API for metadata (there is some limited support in the Tooling REST API, but not enough to create an object).
https://salesforce.stackexchange.com/questions/20763/creating-a-custom-object-using-rest-api