Google Deployment Manager Cloud Scheduler type - api

I see there's no Cloud Scheduler type provided by GCP. I'd like to know the steps to create a template, a composite type, or similar to provide a Cloud Scheduler type. I know Google already provides an example of this.
If it's possible to do so in code, it could make use of the Python client library, though the documentation says this library is not available; I could inline it in the code.
I cannot think of a way to authenticate against the Google API to make such requests.
In short, my question is: how can I make a Deployment Manager type for Cloud Scheduler? I know it is sort of vague. I just want to know if it would be doable.
On the other hand, where can I find the official development status for this GCP service?
For completeness, here's the related GitHub issue too.

Cloud Scheduler type is not supported yet according to GCP's documentation.
I am not aware of any official development for this GCP service other than the one I linked above. That being said, I will create a feature request for your use case. Please add any additional details that I have missed, and you may use the same thread to communicate with the Deployment Manager team.

I was looking for this functionality and thought I should give an up-to-date answer on the topic.
Thanks to https://stackoverflow.com/users/9253778/dany-l for the feature request which led me to this answer.
It looks like this functionality is indeed provided, just that the documentation has yet to be updated to reflect it.
Here's the snippet from https://issuetracker.google.com/issues/123013878:
- type: gcp-types/cloudscheduler-v1:projects.locations.jobs
  name: <YOUR_JOB_NAME_HERE>
  properties:
    parent: projects/<YOUR_PROJECT_ID_HERE>/locations/<YOUR_REGION_HERE>
    name: <YOUR_JOB_NAME_HERE>
    description: <YOUR_JOB_DESCRIPTION_HERE>
    schedule: "0 2 * * *" # daily at 2 am
    timeZone: "Europe/Amsterdam"
    pubsubTarget:
      topicName: projects/<YOUR_PROJECT_ID_HERE>/topics/<YOUR_EXPECTED_TOPIC_HERE>
      data: aGVsbG8hCg== # base64-encoded "hello!"

You can use a plain YAML config file with deployment-manager:
config.yaml:
resources:
- name: <<YOUR_JOB_NAME>>
  type: gcp-types/cloudscheduler-v1:projects.locations.jobs # Cloud Scheduler
  properties:
    parent: "projects/<<YOUR_PROJECT_NAME>>/locations/<<YOUR_LOCATION_ID>>"
    description: "<<JOB_DESCRIPTION_OPTIONAL>>"
    schedule: "* */2 * * *" # accepts 'cron' format
    httpTarget: # the REST API expects camelCase field names (httpTarget, httpMethod)
      httpMethod: "GET"
      uri: "<<URI_TO_YOUR_FUNCTION>>" # trigger link in Cloud Functions
You can even create a Pub/Sub topic and subscription with deployment-manager as well; just add:
- name: <<TOPIC_NAME>>
  type: pubsub.v1.topic
  properties:
    topic: <<TOPIC_NAME>>
- name: <<NAME>>
  type: pubsub.v1.subscription
  properties:
    subscription: <<SUBSCRIPTION_NAME>>
    topic: $(ref.<<TOPIC_NAME>>.name)
    ackDeadlineSeconds: 600
NOTE: to get <<YOUR_LOCATION_ID>> use gcloud app describe.
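If you just want the location ID on its own, a one-liner like this should work (assuming the project has an App Engine application, since gcloud app describe reads its metadata):
gcloud app describe --format="value(locationId)"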
To deploy use:
gcloud deployment-manager deployments create <<DEPLOYMENT_NAME>> --config=<<PATH_TO_YOUR_YAML_FILE>>
To delete use:
gcloud deployment-manager deployments delete <<DEPLOYMENT_NAME>> -q
For more properties on Cloud Scheduler read the documentation:
https://cloud.google.com/scheduler/docs/reference/rpc/google.cloud.scheduler.v1#google.cloud.scheduler.v1.HttpTarget

Related

How to detect in GitLab CI that a pipeline was triggered when a Merge Request was created

I'm writing a script that sends a message in Ryver based on certain events for GitLab's Merge Requests. The supported scenarios:
When a Merge Request is created
When comments are made (code review)
When new commits make the pipeline fail
The following allows limiting the pipeline to merge requests only:
only:
  - merge_requests
script:
  - ./ryver.sh # This does the logic of sending the message based on the events
I tried using export to print all the variables in the pipeline but couldn't find a way to explicitly detect the event that triggered this job (Code Review, Creation, etc).
I tried:
Merge Request status
Comparing commits
Using times (not a very reliable way)
I wonder:
Can we detect what event triggered the pipeline within the Merge Request scope?
Or
Do I need to use a WebHook to access this information?
Or
There is another way to do what my team is trying to do?
I'm open to suggestions or other ways to do it that are not related to .gitlab-ci.yml, as long as it is free.
Can we detect what event triggered the pipeline within the Merge Request scope?
Yes, this is contained in the predefined CI_PIPELINE_SOURCE variable, as frakman1 answered.
Do I need to use a WebHook to access this information?
It depends on what you want to do. As stated above, this information is available inherently in the pipeline as a predefined variable. However, it should be noted that only certain events trigger merge request pipelines at all.
For example, comments on merge requests do not trigger pipelines. If you want to react to comments or status changes, you will probably need a webhook.
The pipeline source information is available both in webhooks and the list pipelines API in the source key of the response (in GitLab 14.3+).
Webhooks expose the event in the X-Gitlab-Event header and payload for the relevant events.
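For a rough idea of what such a receiver could look like, here is a minimal sketch (Node.js with Express; the route path and handler logic are hypothetical):
// Hypothetical minimal GitLab webhook receiver; route and handler bodies are illustrative.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/gitlab-webhook', (req, res) => {
  // GitLab identifies the event type in this header, e.g. "Merge Request Hook".
  const event = req.get('X-Gitlab-Event');
  if (event === 'Merge Request Hook') {
    // "action" distinguishes open/update/merge/close events.
    const action = req.body.object_attributes && req.body.object_attributes.action;
    console.log(`Merge request event: ${action}`);
    // ...forward a notification to Ryver here...
  } else if (event === 'Note Hook') {
    console.log('New comment (on a merge request, commit, issue, or snippet)');
  }
  res.sendStatus(200);
});

app.listen(8080);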
There is another way to do what my team is trying to do?
Webhooks will likely be more reliable than relying on in-pipeline jobs because webhooks can capture more events than jobs/pipelines can. Webhooks, for example, could send you notifications even when no pipeline is created at all. It will also work if your pipeline is blocked/timed out for some reason.
The disadvantage, however, is that you will need to develop and host your own web application to handle the incoming webhooks.
There are many project integrations built into GitLab for sending notification webhooks directly to various services. Unfortunately, Ryver is not one of them.
If you want to send notifications from jobs, I have found that using apprise simplifies this greatly, and it supports Ryver.
A basic template job may look like this:
.send_notification:
  image: python:3.9-slim
  before_script:
    - pip install apprise
  variables:
    RYVER_ORG: "YourOrganization"
    # Define RYVER_TOKEN in your CI/CD variables settings
    NOTIFICATION_TITLE: "Placeholder Title"
    NOTIFICATION_BODY: "Placeholder Body"
  script:
    - apprise -vv -t "${NOTIFICATION_TITLE}" -b "${NOTIFICATION_BODY}" "ryver:///${RYVER_ORG}/${RYVER_TOKEN}"
Using jobs in combination with when: on_failure or when: on_success can be useful:
stages:
  - build
  - build_notification

build:
  stage: build
  script:
    - make build

notify_build_failure:
  stage: build_notification
  when: on_failure
  extends: .send_notification
  variables:
    NOTIFICATION_TITLE: "Failed - $CI_PROJECT_NAME pipeline $CI_PIPELINE_ID"
    NOTIFICATION_BODY: "The build failed for the pipeline. See $CI_PIPELINE_URL"

notify_build_success:
  stage: build_notification
  when: on_success # the default
  extends: .send_notification
  variables:
    NOTIFICATION_TITLE: "Build Success - $CI_PROJECT_NAME pipeline $CI_PIPELINE_ID"
    NOTIFICATION_BODY: "The build has succeeded. See $CI_PIPELINE_URL"
Or use a default after_script, which will run even if the job fails. This is an easy way to have your ryver.sh script evaluated after each job. The script logic can determine whether to send the notification and what its contents should be.
default:
  after_script:
    - ./ryver.sh
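Inside ryver.sh, the predefined CI_JOB_STATUS variable (available during after_script) can drive that decision. A minimal sketch, assuming apprise is available in the job image and RYVER_ORG/RYVER_TOKEN are set as CI/CD variables:
#!/bin/sh
# Hypothetical sketch: notify only when the job failed, using predefined CI variables.
if [ "$CI_JOB_STATUS" = "failed" ]; then
  apprise -vv \
    -t "Job $CI_JOB_NAME failed in $CI_PROJECT_NAME" \
    -b "See $CI_JOB_URL" \
    "ryver:///${RYVER_ORG}/${RYVER_TOKEN}"
fi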
You can use logic like this to detect that the commit came from a Merge Request:
$CI_PIPELINE_SOURCE == 'merge_request_event'
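For example, a rules: block like this (the job name is hypothetical) runs the job only for merge request pipelines:
notify_mr:
  script:
    - ./ryver.sh
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'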
See here for more details on how to control what kicks off a pipeline.
You can find out what triggered a pipeline by looking at this variable:
CI_PIPELINE_SOURCE
For pipelines triggered from a merge request its value is merge_request_event.
See here for more details about predefined variables.

Outdated CloudFormation schemas for IntelliJ/WebStorm?

I am getting schema validation warnings ("Value should be one of ...") when a YAML file containing a CloudFormation template is open. It seems that IntelliJ/WebStorm validate YAML against remote JSON schemas when available, in this case apparently https://www.schemastore.org/json/ (as stated here: https://www.jetbrains.com/help/phpstorm/yaml.html#remote_json).
But for some reason a type as simple as a CloudFront distribution does not validate:
Type: AWS::CloudFront::Distribution is flagged, while for example Type: AWS::ECS::TaskDefinition is accepted fine. To me it looks like https://www.schemastore.org/json/ should be up to date. Is anyone else experiencing similar issues?
I also tried this plugin, https://plugins.jetbrains.com/plugin/7371-aws-cloudformation, but that doesn't seem to work even when YAML validation by JSON schema is disabled.
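For reference, a minimal template along these lines is enough to trigger the warning (the resource and origin names are placeholders):
Resources:
  MyDistribution:
    Type: AWS::CloudFront::Distribution  # the IDE flags this line
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: myOrigin
            DomainName: example.s3.amazonaws.com
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: myOrigin
          ViewerProtocolPolicy: allow-all
          ForwardedValues:
            QueryString: false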

Serverless - large app

I am new to serverless framework.
I am starting a REST API that will have multiple routes, for example:
GET user/{userid}
POST user
GET account/{accountid}
POST account
Do I need 2 services - accounts + users?
What are the best practices? If 2 services, then 2 serverless.yml files? Does anyone have an example of a large serverless app?
Thanks everybody!
For your example it's enough to use one service (serverless.yml).
You can use one Lambda in a single service to handle both user and account requests.
functions:
  <your-function-name>:
    handler: handler.execute
    events:
      - http:
          path: /user/{userid}
          method: get
      - http:
          method: post
          path: /user
      - http:
          path: /account/{accountid}
          method: get
      - http:
          method: post
          path: /account
Or you can create 2 lambdas (one per entity)
functions:
  user:
    handler: userHandler.execute
    events:
      - http:
          path: /user/{userid}
          method: get
      - http:
          method: post
          path: /user
  account:
    handler: accountHandler.execute
    events:
      - http:
          path: /account/{accountid}
          method: get
      - http:
          method: post
          path: /account
It really depends on the architecture you want for your app.
Take a look here; I think it might help you decide what you really want.
If you have a lot of endpoints, at some point you might need 2 services, because you'll hit the CloudFormation resource limit per stack. You can always set up the path mapping if you want to have one single URL for your app.
resources:
  Resources:
    pathmapping:
      Type: AWS::ApiGateway::BasePathMapping
      Properties:
        BasePath: <your base for this service>
        DomainName: mydomain.com
        RestApiId:
          Ref: ApiGatewayRestApi
        Stage: dev
In a few words: as you wish.
Technically, you can aggregate several functions into one and invoke a specific one based on an attribute of the event parameter. Moreover, you can run an express/koa server inside AWS Lambda (or another FaaS) without any pain.
As a bonus you can use ANY and {any+}:
events:
  - http:
      path: /foo
      method: ANY
  - http:
      path: /foo/{any+}
      method: ANY
But in general it depends on the situation. If you invoke a specific endpoint very often, then it's better to move it to a separate lambda. If you know that a bunch of endpoints are seldom invoked, it's better to aggregate them under one lambda, especially if you use warm-up.
One thing I like about architecture is that there is no single right answer, only the most suitable one for your problem/situation. On top of that, best practices are a guideline to help you with all of that.
In my case, I added the complexity to the code so my CRUD paths are very simple:
cms/{entity} or cms/{entity}/{id}. The entity maps to a collection in my backend, so I know which model to use.
Applying this to your question you would have something like:
GET {entity}/{userid}
POST {entity}
With this solution you don't have to create a function for each new entity you add to your DB. It also follows the open-closed principle from SOLID.
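A rough sketch of what such a handler could look like (Node.js behind API Gateway's Lambda proxy integration; the in-memory "models" are stand-ins for real data-access modules):
// Hypothetical single handler that dispatches on the {entity} path parameter.
const models = {
  user: { findById: async (id) => ({ id, kind: 'user' }), create: async (b) => b },
  account: { findById: async (id) => ({ id, kind: 'account' }), create: async (b) => b },
};

module.exports.execute = async (event) => {
  const { entity, id } = event.pathParameters || {};
  const model = models[entity];
  if (!model) {
    return { statusCode: 404, body: JSON.stringify({ error: `Unknown entity: ${entity}` }) };
  }
  // Adding a new entity only means registering a new model (open-closed principle).
  if (event.httpMethod === 'GET') {
    return { statusCode: 200, body: JSON.stringify(await model.findById(id)) };
  }
  if (event.httpMethod === 'POST') {
    return { statusCode: 201, body: JSON.stringify(await model.create(JSON.parse(event.body))) };
  }
  return { statusCode: 405, body: 'Method Not Allowed' };
};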

Retrieve a list of all installed macOS Services?

You can programmatically invoke services if you already know the name of the service. As best I understand, the Services menu is built by calling a validation method on each published Service.
Is there a way to access a list of installed Services without using the Services user dialog?
EDIT: I don't mean background processes. I am talking about the items in the Services menu in Finder. Overview of what they are here.
There is an API provided by Apple, documented here - https://developer.apple.com/documentation/coreservices/launch_services
Note that you need to have your service registered in the system database, and the consuming code needs to know about its existence.
I hope this helps.
The somewhat supported (but poorly documented) approach is to call lsregister and parse the output. The output does not have a documented or guaranteed format, however.
You run it this way from the command line:
LSREGISTER="/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/Support/lsregister"
${LSREGISTER} -dump
(Yes, it's deeply buried and not in PATH.)
This dumps a ton of information. You just want services, which look like this:
--------------------------------------------------------------------------------
service id: FileMerge/Compare To Master (0x16f8)
menu: FileMerge/Compare To Master
port: FileMerge
message: diffVersusMasterService
timeout: -1
send types: "NSFilenamesPboardType"
The part you want is the "menu" tag:
$LSREGISTER -dump | grep ^menu: | cut -c 29-
Obviously you can parse this more directly in code, but the only way I know of that's even vaguely supported is to run lsregister.
OK, that's obnoxious. If you're willing to use private APIs, it's pretty straightforward. Define an interface for LSServiceRecord:
@interface LSServiceRecord
+ (id)enumerator;
- (NSString *)localizedMenuItemTitle;
@end
And then you can enumerate over them to get the menu titles:
id enumerator = [LSServiceRecord enumerator];
for (id item in enumerator) {
  NSLog(@"%@", [item localizedMenuItemTitle]);
}
You might find the portName property helpful. It's the name of the application that registered the service. You might also find +[LSServiceRecord enumeratorForContentsOfPasteboard:] useful if you're trying to limit it to valid services.
If you want to explore more, I recommend Hopper, and looking in LaunchServices framework.
Try launchctl list; see https://guide.macports.org/chunked/reference.startupitems.html for some more info.

Worklight Analytics payload

Worklight 6.2.0, Mobile Web Environment
The Worklight Info Center offers three formulations for logging an analytics message:
WL.Analytics.log('my record');
WL.Analytics.log({data: [1,2,3]});
WL.Analytics.log({data: [1,2,3]}, 'MyData');
I am successfully using the first of these, but the other two produce no analytics and my fail() function is not fired.
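For reference, these calls return a promise, so the failure handler in question is attached roughly like this (a sketch; the handler bodies are illustrative):
WL.Analytics.log({ data: [1, 2, 3] }, 'MyData')
  .then(function () {
    WL.Logger.debug('analytics record sent');
  })
  .fail(function (errObj) {
    WL.Logger.error('analytics log failed', errObj);
  });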
I see in the online tutorials a further formulation
WL.Analytics.log({_activity: "myActivity" });
This too produces no output.
Question: Are there other formulations that do work?
All calls other than
WL.Analytics.log('my record')
were intended for Analytics features that were not implemented or did not make it into the Worklight 6.2 release. Clearly this is not reflected in the documentation. I will open a defect to either have the logs searchable or have this limitation reflected in the documentation.
If the following call:
WL.Analytics.log({_activity: "myActivity" });
does not result in activities being searchable in the 'Activites' page of the Analytics console, then this is a defect for Worklight 6.2.
I can confirm that all of the above issues are fixed for the next release of Worklight (whether its through code fixes or documentation). If you need some of these fixes backported to a previous release of Worklight, please open a PMR so that we can begin that process.
I would suggest passing in the stringify property as true.
var obj = {name : "bob", age : 100};
WL.Logger.config({stringify : true, pkg: 'myActivity'});
WL.Logger.debug(obj);
If you want a pretty format you could pass in the pretty property
WL.Logger.config({stringify : true, pretty : true, pkg: 'myActivity'});
WL.Logger.debug(obj);
Hope this helps.