In VSTS, how do I add an email notification task to the build and release definitions?

I am new to VSTS Continuous Integration and configuring one of my first sets of build and release definitions.
Is there a task I can add that will allow me to notify certain team members of events such as a build failed, or a release is ready?

In the latest iteration of the notification settings at https://account.visualstudio.com/_notifications (where there is now a single global settings page), if (like me) you don't run everything full screen:
Check that the Add button isn't off the side of the screen!

For a build failure notification, you can set one up at https://account.visualstudio.com/_notifications -> New -> select Build as the category -> select "A build fails" as the template -> Next -> select "Other email" for Deliver to -> add the email addresses, separated by semicolons (;) if there are several -> optionally filter to a specific team project -> Finish.
For a release success notification, there is currently no built-in setting. You can create your own extension for a release success email notification; for more detail, refer to the documentation on sending email notifications.


How do I correctly configure Segment, Mixpanel and Branch for a React Native app?

After several days of trying different approaches, we are running out of options. Our aim is to have a React Native app which can be opened/installed using a Branch link. Data about the user's usage and how they came to be in the app (attribution) is eventually sent to Mixpanel. As there are several other places where we would like to analyse usage data, and we would like to keep platform coupling weak, we opted to use Segment.
The key challenge is getting the attribution data, as determined by Branch, applied to the same distinct ID in Mixpanel as the general app usage.
E.g. the event for "Viewed article 123" is assigned to user ABC, while the event from the same person, on the same phone, during the same session, saying "App opened via QR code" is assigned to user DEF. (Those event names are just illustrative and context is actually in the metadata.)
So far I have tried a React Native-led setup using:
import analytics from '@segment/analytics-react-native';
import branch from '@segment/analytics-react-native-branch';
import mixpanel from '@segment/analytics-react-native-mixpanel';

analytics.setup(WRITE_KEY, {
  using: [branch, mixpanel],
});
We have also tried a more native approach where Segment and Branch are initialized and share ids within MainApplication.java
Analytics analytics = new Analytics.Builder(this, WRITE_KEY).trackApplicationLifecycleEvents().recordScreenViews().build();
Analytics.setSingletonInstance(analytics);
Branch.getInstance().setRequestMetadata(
    "$mixpanel_distinct_id",
    Analytics.with(this).getAnalyticsContext().traits().anonymousId()
);
Branch.getInstance().setRequestMetadata(
    "$segment_anonymous_id",
    Analytics.with(this).getAnalyticsContext().traits().anonymousId()
);
We also tried another version where Mixpanel was also initialized in MainApplication.java and the distinct Id passed from there.
MixpanelAPI mp = MixpanelAPI.getInstance(this, MP_KEY);
Branch.getInstance().setRequestMetadata("$mixpanel_distinct_id", mp.getDistinctId());
While experimenting with these native setups (in several different permutations), we were calling Segment's useNativeConfiguration method.
With regard to the actual cloud routing, we have also tried every reasonable setup we can imagine, including:
Segment -> MP AND Branch -> MP
Segment -> MP AND Segment -> Branch -> MP
Segment -> MP AND Segment <-> Branch -> MP (notice Branch is both importing and exporting Segment data)
Segment -> Branch -> MP
We have tried many different permutations of the possible configurations and none have created correctly joined-up data. We are open to replacing Segment or Branch with an alternative, but Mixpanel and React Native cannot be replaced due to business constraints.
The latest cloud configuration, "Segment -> Branch -> MP", showed the greatest promise, but even though the documentation says Identify calls are passed to Branch, upon debugging they are not, meaning user profiles can never be populated in Mixpanel.
Any help that can be provided would be greatly appreciated.
Ok, we think we have got this working in an acceptable way with the aforementioned technologies.
The setup we ended up with was to initialise Analytics (Segment) and Branch at the native level, set both "$mixpanel_distinct_id" and "$segment_anonymous_id" to the Segment anonymous ID for the different stages of the pipeline, and break the connections between Segment and Branch. So in the end we had the two following paths:
App -> Segment -> Mixpanel
App -> Branch -> Mixpanel
As only the events from Branch in Mixpanel (prefixed [BRANCH]) carry the user's attribution, we then set up a Lambda function to read these events and call the Mixpanel API to set user properties for UTM medium, campaign and channel. To get this connected, we had to reconnect Branch to export events to Segment, using an entirely separate source, which then sends them on to Lambda as a destination. Something like:
Branch -> Lambda -> Mixpanel
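A minimal sketch of such a Lambda in Python follows. The field names (anonymousId, campaign, feature, channel) and the engage endpoint usage are assumptions; verify them against your actual Branch payloads and Mixpanel's HTTP API documentation:

```python
"""Sketch of a Lambda that copies Branch attribution onto the matching
Mixpanel profile. Field names and endpoint details are assumptions."""
import json
import urllib.request

MIXPANEL_TOKEN = "YOUR_PROJECT_TOKEN"  # placeholder

def attribution_profile(branch_event):
    """Build a Mixpanel engage profile-set record from one Branch event.
    The distinct id is the Segment anonymous id we injected into Branch
    as $mixpanel_distinct_id in the app."""
    props = branch_event.get("properties", {})
    return {
        "$token": MIXPANEL_TOKEN,
        "$distinct_id": branch_event["anonymousId"],
        "$set": {
            "utm_campaign": props.get("campaign"),
            "utm_medium": props.get("feature"),
            "utm_channel": props.get("channel"),
        },
    }

def handler(event, context):
    """Lambda entry point (assumes an API Gateway-style event body)."""
    branch_event = json.loads(event["body"])
    payload = json.dumps([attribution_profile(branch_event)]).encode()
    req = urllib.request.Request(
        "https://api.mixpanel.com/engage#profile-set",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status}
```

The key design point is that the profile update is keyed on the same Segment anonymous ID that the app set as Branch request metadata, which is what joins the two event streams.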
Code snippets:
// MainApplication.java
this.initializeSegment();
RNBranchModule.getAutoInstance(this);
RNBranchModule.setRequestMetadata(
    "$segment_anonymous_id",
    Analytics.with(this).getAnalyticsContext().traits().anonymousId()
);
RNBranchModule.setRequestMetadata(
    "$mixpanel_distinct_id",
    Analytics.with(this).getAnalyticsContext().traits().anonymousId()
);
private void initializeSegment() {
    Analytics.Builder builder = new Analytics.Builder(this, BuildConfig.SEGMENT_WRITE_KEY)
            .flushQueueSize(20)
            .collectDeviceId(true)
            .trackApplicationLifecycleEvents();
    if (BuildConfig.DEBUG) {
        builder.logLevel(Analytics.LogLevel.VERBOSE);
    }
    Analytics analytics = builder.build();
    Analytics.setSingletonInstance(analytics);
}
import analytics from '@segment/analytics-react-native';

analytics.useNativeConfiguration();

How can I create an alert in OMS when a Linux service is stopped?

I am trying to create an alert in OMS when a Linux service is stopped.
AFAIK, you have the options below to accomplish this requirement.
Option I:
If the service/daemon uses the default configuration, its log information will be logged under /var/log/messages.
If the information is logged to /var/log/messages whenever a Linux service is stopped, then follow the steps below to get alerted:
Go to the Azure portal -> YOURLOGANALYTICSWORKSPACE -> Advanced settings -> Data -> Syslog -> type 'daemon' -> click '+' -> click 'Save'. For more information, see https://learn.microsoft.com/en-us/azure/azure-monitor/platform/data-sources-syslog.
Go to the Azure portal -> YOURLOGANALYTICSWORKSPACE -> Logs -> type 'Syslog' -> click 'Run'. Check the 'SyslogMessage' column in the output. The output also has various other useful columns, such as SeverityLevel, ProcessName and ProcessID, which you can use while developing the query, based on your need.
The query would look something like this:
Syslog
| where Facility == "daemon"
| where SyslogMessage has "xxxxxxx" and SyslogMessage has "stopping"
| summarize AggregatedValue = any(SyslogMessage) by Computer, bin(TimeGenerated, 30s)
Create and configure a custom log alert from the Log Analytics workspace alert tile using the above query. Set the threshold, frequency and period while configuring the alert, and provide the intended action group so you are notified when the alert is triggered.
Option II:
If the service/daemon is custom-configured, its log information will be logged to that particular custom path.
If, whenever a Linux service is stopped, the information is logged to a file such as /xxxx/yyyy/zzzz.txt (other examples are /aaaa/bbbb/jenkins/jenkins.log, cccc/dddd/tomcat/catalina.out, etc.), then follow the steps below to get alerted:
Go to the Azure portal -> YOURLOGANALYTICSWORKSPACE -> Advanced settings -> Data -> Custom Logs -> click 'Add +' -> .... For more information, see https://learn.microsoft.com/en-us/azure/azure-monitor/platform/data-sources-custom-logs.
Go to the Azure portal -> YOURLOGANALYTICSWORKSPACE -> Logs -> type 'CUSTOMLOGNAME_CL' -> click 'Run'. Check a column like 'RawData' in the output.
The query would look something like this:
CUSTOMLOGNAME_CL
| where RawData has "xxxxxxx" and RawData has "stopping"
| summarize AggregatedValue = any(RawData) by Computer, bin(TimeGenerated, 30s)
As in Option I, create and configure a custom log alert using the above query, set the threshold, frequency and period, and provide the intended action group so you are notified when the alert is triggered.
Option III:
If your service log data can't be collected with custom logs either, send the data directly to Azure Monitor using the HTTP Data Collector API, which is explained here: https://learn.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api.
An example using runbooks in Azure Automation is provided in 'Collect log data in Azure Monitor with an Azure Automation runbook': https://learn.microsoft.com/en-us/azure/azure-monitor/platform/runbook-datacollect.
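For reference, the Data Collector call can be sketched in Python as follows. The workspace ID and shared key are placeholders, and the record shape is a hypothetical example; the signature construction follows the linked documentation:

```python
"""Minimal sketch of sending a service-status record to Azure Monitor
via the HTTP Data Collector API. Workspace id/key are placeholders."""
import base64
import datetime
import hashlib
import hmac
import json
import urllib.request

WORKSPACE_ID = "YOUR_WORKSPACE_ID"                   # placeholder
SHARED_KEY = base64.b64encode(b"YOUR_KEY").decode()  # placeholder (base64)
LOG_TYPE = "ServiceStatus"  # appears as ServiceStatus_CL in Log Analytics

def build_signature(date, content_length):
    """SharedKey authorization header, as documented for the API."""
    string_to_hash = ("POST\n" + str(content_length) +
                      "\napplication/json\nx-ms-date:" + date + "\n/api/logs")
    digest = hmac.new(base64.b64decode(SHARED_KEY),
                      string_to_hash.encode("utf-8"), hashlib.sha256).digest()
    return "SharedKey {}:{}".format(WORKSPACE_ID,
                                    base64.b64encode(digest).decode())

def post_records(records):
    """POST a list of dicts; each becomes a row of the custom log."""
    body = json.dumps(records).encode("utf-8")
    date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
    req = urllib.request.Request(
        "https://{}.ods.opinsights.azure.com/api/logs"
        "?api-version=2016-04-01".format(WORKSPACE_ID),
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": build_signature(date, len(body)),
                 "Log-Type": LOG_TYPE,
                 "x-ms-date": date},
    )
    urllib.request.urlopen(req)  # HTTP 200 means the record was accepted

# Example (hypothetical record):
# post_records([{"Computer": "web01", "Service": "nginx", "State": "stopped"}])
```

Once ingested, the records can be queried as ServiceStatus_CL and alerted on exactly like the custom-log options above.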
Hope this helps!! Cheers!! :)

Bamboo - return custom messages and error codes.

From the looks of it, Bamboo only returns a 0 or 1 if a script fails or succeeds. Is it possible to add any customization at all in order to get more information on why a script failed?
I have a script that builds several repositories and would like very detailed information on any failures that may occur (which repo failed, why, etc.).
Is there any way to handle this through Bamboo? I can create a log file that outputs the data I want, but if possible I would like to see any issues through Bamboo OR the Bamboo email that can be sent whenever a failure occurs. Is there a way to customize the email to include text from a text file (my log file)?
Bamboo expects exit code 0 for a successful execution; anything else results in a failure. However, this exit code is listed in the respective build log, like below:
simple 14-Aug-2017 14:59:29 Failing task since return code of [mvn clean package] was 1 while expected 0
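If you control the script, another option is to have it print per-repository diagnostics itself and exit non-zero, so the details land in the build log (and therefore in notifications that include log lines). A minimal sketch in Python, where the repository names and the Maven command are placeholders for your real setup:

```python
"""Sketch of a build script for a Bamboo script task: it builds several
repositories, prints which ones failed (so the detail shows up in the
build log and in notifications), and exits non-zero on any failure.
Repository names and the mvn command are placeholders."""
import subprocess
import sys

REPOS = ["repo-a", "repo-b", "repo-c"]  # hypothetical checkout directories

def run_build(repo):
    """Build one repository; replace the command with your real build."""
    result = subprocess.run(["mvn", "clean", "package"], cwd=repo,
                            capture_output=True, text=True)
    if result.returncode != 0:
        print("BUILD FAILED: " + repo)
        print("\n".join(result.stdout.splitlines()[-20:]))  # log tail
    return result.returncode == 0

def build_all(repos, build_one):
    """Return a process exit code: 0 if every repo built, 1 otherwise."""
    failed = [repo for repo in repos if not build_one(repo)]
    if failed:
        print("Failed repositories: " + ", ".join(failed))
        return 1
    print("All repositories built successfully")
    return 0

# In the Bamboo task itself: sys.exit(build_all(REPOS, run_build))
```

Bamboo still only sees pass/fail from the exit code, but the printed lines name the failing repository in the log that notifications can quote.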
If you want the log snippet to be sent in the email, you can customise the email notification templates in WEB-INF/classes/notification-templates/. Some notification content can be configured via system properties, such as the number of log lines to include in email notifications that display log information.
Hope that helps.

Not able to approve tasks in Documentum D2

In one of the workflows, a task notification was sent to a user's inbox. When the user tries to acquire the task, they get an error like "missing package for the current task of the workflow".
Kindly provide suggestions on resolving this issue.
If the package was added as a mandatory package in the workflow definition, and the task attachment is missing during task completion, the user is expected to get this message in the D2 client.
Please check whether the task attachment is missing from the repository in this scenario.
You can either:
Verify that the associated activity definition for the task (from Process Builder, for example) has the package(s) defined correctly.
or
Verify that the object(s) attached to the task (documents or folders) exist in the repository and that the user has at least BROWSE permission on them.
Also, a good idea is to check the D2 app server and JMS logs.

How to find the Transport Request with my custom objects?

I've copied two Function Modules QM06_SEND_PAPER_STEP2 and QM06_FM_TASK_CLAIM_SEND_PAPER to similar Z* Function Modules. I've put these FMs into a ZQM06 Function Group which was created by another developer.
I want to use Transaction SCC1 to move my developments from one client to another. In transaction SE01 Transport Organizer I don't find the names of my 2 function modules anywhere.
How can I find out the change request with my work?
I copied the FM in order to modify functionality and I know FMs are client independent.
Function modules, like other ABAP workbench entities, are client-independent. That is, you do not need to copy them between clients on the same instance.
However, you can find the transport request that contains your changes by going to transaction SE37, entering the name of your function module, and then choosing Utilities -> Versions -> Version Management from the menu.
Provided you did not put the changes into a local package (like $TMP), the system will have asked you for a transport request when you saved or activated your changes. The exception is when the function group is already in a modifiable transport request; in that case it will have created a new task for your user under that request, and that task will contain your changes. To check the package, use Goto -> Object Directory Entry from the menu in SE37.
Function modules are often added to transports under the function group name, especially if they're new.
The easiest way to find the transport is to go to SE37, display the function module, and then go to Version Management.
The answer from mydoghasworms is correct. Alternatively you can also use transaction SE03 -> Search for Objects in Requests/Tasks (top of the transaction screen) -> check the box next to "R3TR FUGR" and type in your function group name.