Background and Goal
I have a Debian Linux VM on GCP which I start manually every morning; after it finishes its work, it shuts itself down with a Linux command. I want to automate the start of the VM using Cloud Scheduler. The question GCP auto shutdown and startup using Google Cloud Schedulers has several answers, and I am interested in pursuing the one (https://stackoverflow.com/a/65062924/10322004) proposed by @nikelone, because it seems simple and has been endorsed by @Damien and @RayFoss as easy. I am a neophyte in these matters and could not fully comprehend their replies, so I created this post to elicit clearer answers for someone like me.
What I have tried
I went to https://cloud.google.com/compute/docs/reference/rest/v1/instances/start (call this page A), tried the API, and was able to start my already stopped VM when I clicked the Execute button. I presume this means my entries were fine and can be used with appropriate software, such as Cloud Scheduler, to start the VM on a predefined schedule. But the problem is that I do not know or understand how to proceed from here. My questions are below.
My Questions
On page A, the last three sections are titled Authorization Scopes, IAM Permissions, and Examples, and none of them says anything specific about what the user should do. Is it correct to assume that they have nothing to do with Cloud Scheduler but relate to other methods of achieving the same goal? If not, what should I be doing to follow the statements in these three sections?
Assuming the answer to question 1 is "yes", meaning I can now start scheduling with Cloud Scheduler, I next looked at the quickstart for Cloud Scheduler at https://cloud.google.com/scheduler/docs/quickstart (call this page B). The list of things to do is quite long: installing the Cloud SDK, running quite a few commands on the console, enabling some features, setting up Pub/Sub, creating a job, running the job, and verifying the results in Pub/Sub. This looks like a daunting set of tasks, and I could not understand why it is necessary to jump through these hoops to use something that was achieved earlier with just a few keystrokes. Are all these steps necessary? Or is there a way to use Cloud Scheduler directly, without so many intermediate steps?
Now assume the answer to question 2 is that I have to perform all the steps stated on page B. If I run into a problem while accomplishing those tasks, my VM may get messed up irretrievably. Is there a way in which the Cloud Platform or its components can reset my VM to its current state as of today, which works fine? I really do not want to end up with something worse than what I have now.
To answer your questions:
Authorization scopes and IAM permissions are required for you to call Compute Engine API methods such as instances.start and instances.stop. You need to set the right scope and the right IAM permission on your job, or else it will fail. They are indeed related to the method you're interested in calling, so you must keep them in mind. What you see under Examples are ways to call the API from different programming languages, so you don't need to pay attention to them, as you will create the job through the Cloud Console. To further address this part, see the full steps I included below.
The answer you're trying to follow uses an HTTP target, while the quickstart you've linked uses Pub/Sub; they are different because they serve separate use cases. This link shows proper instructions for creating a scheduler job with an HTTP target. You can create this kind of job straight from the Cloud Console or with a one-liner gcloud command. If your configuration is incorrect, the trigger will not execute the endpoint URL and you will see an error that you must fix.
Addressed in answer #2.
Basically, you just need to follow the instructions in the link you've sent. However, I'll post them here as well, along with my explanation:
1. Go to https://cloud.google.com/scheduler, click Go to Console, then click Create Job. Fill in the required fields (those with red asterisks).
2. Select HTTP as the target type.
3. Enter this as your URL (modify the capitalized parts):
https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_ZONE/instances/INSTANCE_NAME/start
4. Choose POST as the HTTP method.
5. Click Show more and choose "Add OAuth Token" as the Auth header.
6. Enter your service account. This is used to pass an OAuth token when your scheduler job calls the Compute API. Make sure the service account you use has the Compute Instance Admin role, because that role contains the permissions to start and stop your instance. See these instructions on how to grant a role to a service account. If you're not sure which service account to use, feel free to use the Compute Engine default service account.
7. Add this as the scope:
https://www.googleapis.com/auth/cloud-platform
The description of this scope: "See, edit, configure, and delete your Google Cloud Platform data."
8. Repeat these steps for a stop job, changing the URL in step 3 to end in /stop instead of /start.
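If you would rather script the job than click through the console, the same thing can be done with a one-liner gcloud scheduler jobs create http command, or with the Cloud Scheduler client library. Below is a minimal Python sketch of the start job (my own illustration, not part of the official instructions); the project, region, zone, instance name, schedule, time zone, and service account are all placeholders you must replace.

# pip install google-cloud-scheduler
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
# Placeholder project and region for the scheduler job itself.
parent = client.common_location_path("PROJECT_ID", "us-central1")

job = scheduler_v1.Job(
    name=f"{parent}/jobs/start-vm",
    schedule="0 7 * * *",           # every day at 07:00
    time_zone="America/New_York",   # placeholder time zone
    http_target=scheduler_v1.HttpTarget(
        uri=("https://compute.googleapis.com/compute/v1/projects/"
             "PROJECT_ID/zones/INSTANCE_ZONE/instances/INSTANCE_NAME/start"),
        http_method=scheduler_v1.HttpMethod.POST,
        oauth_token=scheduler_v1.OAuthToken(
            service_account_email="SERVICE_ACCOUNT_EMAIL",
            scope="https://www.googleapis.com/auth/cloud-platform",
        ),
    ),
)
client.create_job(parent=parent, job=job)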
Related
I am planning to run a large-scale crowdsourcing experiment on MTurk and would therefore like to do this using the API and Python, since I am very interested in the ReviewPolicies feature. I tested this (not only in the sandbox but also in the marketplace) and can't find my created test HIT (reward set to $0.01).
What could be the reason for this? I also read in some prior threads that HITs created with the API are not visible in the developer interface. But they must be visible to workers on the website interface, mustn't they? If not, how will these HITs be found by workers on the dashboard/marketplace?
I published the HIT successfully on the marketplace (via the API) and I can see the HIT response. I expected to also find this specific HIT on the dashboard (signed in as a worker).
This is my first post/question, so if I missed an existing thread that answers it, I would definitely appreciate you linking me to it! Please also let me know if I should be asking this elsewhere.
My question relates to Salesforce.
I have a use case where a client has a monthly batch of files that need to be made available on various cloud-based storage/distribution platforms such as Box and Dropbox, but also on other, less ubiquitous tools specific to the sector. Currently, the client logs into each distribution platform one at a time and uploads the files; then, if at any point any files need to be updated or removed/restricted, the client again logs into each platform one by one and takes the necessary action. Obviously this process is tedious and laborious and leaves multiple gaps for error.

The client and I are discussing a solution that would allow create/read/update/delete actions on all of the distribution platforms without having to leave their Salesforce org. I am aware of existing AppExchange integrations for Box, Dropbox, etc., but to my knowledge they don't quite do everything we need. They tie in nicely, and there are use cases where they are powerful tools, but my understanding of those existing integrations is that they would still each require a dedicated tab within the object and repeated drags and drops of the same files to each tab. The end goal is that, for example, the client drags and drops one time and the file is pushed to the various platforms, or chooses "delete" one time from within Salesforce and the file is removed/restricted on all distribution platforms.
I am a certified SF Admin 1, so perhaps this should be in my wheelhouse, but I feel unsure how to approach it. My feeling is that this calls for a combination of API integrations and Process Builder/Flow work, but I am hoping for some ideas, input, or guidance. Any insight or help any of you have to offer would be greatly appreciated!
Thanks so much!
I want to get two statistics (visits today, visits total) from my Google Analytics account.
I checked Google Analytics resources such as
https://developers.google.com/analytics/devguides/reporting/?hl=en
https://developers.google.com/analytics/devguides/reporting/core/v3/reference
But it seems pretty time-consuming to get a certain ID, OAuth, and everything else working.
I do not need any user authentication, just an API request from my backend (GA authentication should be provided via the request URL, for example).
To be honest, I found myself jumping from one link to another when following the tutorials and did not accomplish anything in the end.
What is the quickest way to get everything working? If there is a nice tutorial on getting JUST the basic stuff (two numbers) from Google Analytics, I would be very grateful. (Everything I see works almost like GA itself, just with custom styles/graphs etc. I need a plain and simple number returned via a REST API, for instance.)
Thanks for any info!
Auth is understandably complicated, but it sounds like you need service account authentication since you are querying your own data and need it to run on the back end.
The quickest way to get from zero to querying the API is to follow the Hello Analytics guide. I have linked you to the PHP service account page, but there are examples for service accounts, web apps, and installed apps in four different languages.
An outline of the main steps:
1. Create a project in the Google Developer Console.
2. Within that project, create a service account and download the p12 file.
3. Add that service account's email address to the particular view you wish to query.
You are now ready to modify the Hello Analytics example.
Below is a simple query for the number of sessions today:
function getResults(&$analytics) {
    // Query the Core Reporting API for today's session count.
    return $analytics->data_ga->get(
        'ga:XXXX',       // Replace with your view ID.
        'today',         // Start date.
        'today',         // End date.
        'ga:sessions');  // Metric to retrieve.
}
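If PHP is not your language, here is a rough Python equivalent of the same query. This is my own sketch, assuming a JSON service-account key (which has largely replaced the p12 file mentioned in step 2); the key path and view ID are placeholders.

# pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]

# Placeholder path to the JSON key downloaded for the service account.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES)
analytics = build("analytics", "v3", credentials=credentials)

# Sessions today; 'ga:XXXX' is your view ID, as in the PHP example above.
result = analytics.data().ga().get(
    ids="ga:XXXX",
    start_date="today",
    end_date="today",
    metrics="ga:sessions").execute()
print(result["totalsForAllResults"]["ga:sessions"])

For the "visits total" number, simply widen the date range, for example start_date="2005-01-01".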
Feel free to ask any clarifying questions in the comments section.
I am implementing a website on which recruited MTurk workers will perform tasks. I plan to recruit workers through MTurk tasks and redirect them to an external website for the actual work. I have the following questions about this plan.
Are there any foreseeable problems with this approach to running HITs? If so, how can we mitigate them?
How should I implement the authentication procedure on my external site? For example, how can I make sure the people who come to the website to perform a specific task are indeed the same group of people recruited earlier for this particular task on MTurk?
When the workers finish the task, how should I integrate the payment procedure with MTurk based on their performance? For example, if a worker is owed $3 after finishing the task on my external site, is it possible for me to tell MTurk to pay him/her this amount programmatically?
The external site will be built in Python, if that detail matters.
Any suggestions and comments based on your experience with and insight into MTurk would be much appreciated!
I am thinking through this for a similar project of mine, and I've experimented as a worker myself. Here is my plan; I hope it is of use to you. (I have not implemented it yet; it is based on an academic HIT I participated in as a worker.) Here goes:
A. Create a template that has language something like:
1. Please open this web site in a new browser window:
http://your-url.xyz.blah/tasks/${token}
2. Read and follow the instructions there.
3. After completing the task, you will receive a confirmation code. Paste
it here: [________]
B. Create some random tokens for your Mechanical Turk data file:
1A1B43B327015141
09F49F2D47823E0C
B5C49A18B3DB56F4
4E93BB63B0938728
CCE7FA60BFEB3198
...
(Generate these tokens from your app; it needs to cross-reference them.)
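For what it's worth, Python's standard secrets module is a quick way to generate tokens of this shape; a small sketch (the count is arbitrary):

# Generate one-time tokens like the samples above.
import secrets

# 8 random bytes -> 16 hex characters, matching the sample tokens.
for _ in range(100):
    print(secrets.token_hex(8).upper())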
C. Your app extracts the token from the URL, looks up the task, and does whatever it needs to do. I personally don't worry about people stumbling onto a URL, since it is a one-time-use token.
D. After a user completes the task on the external web site, the external app gives a confirmation code. The confirmation code should be random and opaque. Only your application will know if any particular code corresponds to a correct or incorrect answer. In fact, if you want, the correctness may not even be determined in real time -- it could be the result of an aggregation and/or comparison across multiple submissions.
E. Write some code to interact with MTurk programmatically. Take the token and confirmation code supplied in the MTurk result and make sure they match in your external app. If they don't match, reject the HIT. If they match, check correctness in your external app and approve or reject accordingly. You might also consider a bonus pay structure.
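To make (E) concrete, here is a rough sketch using the current boto3 MTurk client (an assumption on my part; the original approach predates it). The confirmation_is_valid() helper is hypothetical and stands in for your external app's lookup, and SendBonus is how you would pay a performance-based amount like the $3 in the question:

# pip install boto3  (sketch only; HIT ID, region, and helper are placeholders)
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

def confirmation_is_valid(answer_xml):
    # Hypothetical stand-in: parse the QuestionFormAnswers XML and look the
    # token/confirmation-code pair up in your external app.
    return "EXPECTED_CODE" in answer_xml

for assignment in mturk.list_assignments_for_hit(HITId="YOUR_HIT_ID")["Assignments"]:
    if confirmation_is_valid(assignment["Answer"]):
        mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
        # Performance-based pay: base reward plus a bonus, e.g. to reach $3.
        mturk.send_bonus(
            WorkerId=assignment["WorkerId"],
            AssignmentId=assignment["AssignmentId"],
            BonusAmount="3.00",
            Reason="Completed the external task correctly.")
    else:
        mturk.reject_assignment(
            AssignmentId=assignment["AssignmentId"],
            RequesterFeedback="Confirmation code did not match our records.")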
So, to answer your particular questions:
I don't anticipate problems with the approach I described. That said, Mechanical Turk is both an art and a science. Perhaps more art. Writing good questions and paying Turkers appropriately is something you have to figure out with a combination of common sense, market research, and experimentation.
See (C) above. A token is designed to be used only once. Use long enough tokens and the probability of collision becomes very low.
See (E) above. The Mechanical Turk Developer Guide is a good place to start.
Please share your results back. Or have the Turkers send StackOverflow hundreds of postcards. :)
Notes:
I'm currently exploring qualification tests. I suspect they can be very useful.
I want to get a Turker's Worker ID in my external application, but I haven't figured that part out yet. I'm reading up on it; for example: Getting workerId by assignmentId
I am thinking about using the ExternalQuestion feature from the API: "... you can host the questions on your own web site using an "external" question. ... A HIT with an external question displays a web page from your web site in a frame in the Worker's web browser. Your web page displays a form for the Worker to fill out and submit. The Worker submits results using your form, and your form submits the results back to Mechanical Turk. Using your web site to display the form gives your web site control over how the question appears and how answers are collected." (There is a sketch of this after these notes.)
You might also find PsiTurk to be useful: "PsiTurk is an open platform for conducting custom behavioral experiments on Amazon's Mechanical Turk. ... It is intended to provide most of the backend machinery necessary to run your experiment. It uses AMT's External Question HIT type, meaning that you can collect data using any website. As long as you can turn your experiment into a website, you can run it with PsiTurk!"
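And since the last two notes mention ExternalQuestion, here is a minimal sketch of creating such a HIT with boto3 (again my own illustration; the title, reward, timings, and URL are placeholders). One nice property: when a Worker accepts the HIT, MTurk appends assignmentId, hitId, and workerId as query parameters to your ExternalURL, which also addresses the Worker ID note above.

import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# The ExternalQuestion XML schema is fixed by MTurk; only the URL and
# frame height are yours to choose.
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://your-url.xyz.blah/tasks</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Complete a short task on our site",            # placeholder
    Description="Follow the instructions shown in the frame.",
    Reward="0.50",                                        # placeholder amount
    MaxAssignments=10,
    AssignmentDurationInSeconds=3600,
    LifetimeInSeconds=86400,
    Question=external_question,
)
print(hit["HIT"]["HITId"])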
I am trying to use BetterAuthorizationSample rather than go the so-called "malicious" way of using setuid in order to get root privileges.
Currently I am using AuthorizationCreate() with BLAuthentication to get root access for changing some files, but I am somewhat irritated by the fact that I have to enter my password every time the app launches.
So I came across Apple's HelperTool method, and I just can't figure it out.
I've been working with Cocoa for a couple of months now, but this is just out of my reach, yet I still need it. How would I implement this tool to do simple root-privileged tasks?
Is there a simpler way to use the HelperTool concept, so that my users can enter their password just once and have root privileges granted from then on?
The "modern" way to do a helper tool on Mac OS X is to ship it as part of your app, and use the ServiceManagement framework to deploy it. Your users enter their password once, when deploying the tool. That installs it as a launchd job; from then on you use any launchd on-demand mechanism to launch the helper and get it to do work for you.
Notice that the blog post linked above recommends protecting subsequent invocations of the helper with an Authorization Services escalation, to avoid creating an arbitrary privilege escalation that anyone can use. This seems to somewhat undercut the "users can just enter their password once" benefit, although you can use AuthorizationRightSet() to register your app's authorization right in the policy database, so you can actually define whether users need to present passwords beyond the first deployment.
The sample code from that post is on GitHub, and demonstrates using ServiceManagement to deploy the helper tool and Authorization Services to control access to it.