Is it possible to change the job for an existing Stream Analytics module?

I deployed a new Stream Analytics job and want to point an already created ASA module at it, replacing its current job.
But I only see an interface for updating the initial job:
a "Module outdated - click here to update" notice and an "Update ASA Module" button, which on click generates JSON with updated properties for the old job.
Is it possible to replace the job with a new one, or should I create a new module?

To change the job for your existing Stream Analytics module on IoT Edge, you don't have to create a new module. Here are the steps:
Create a new ASA job, or modify the existing one
Click on Update ASA Module
Deploy the new job (see the sketch below)
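If you script your deployments, the last step can also be done from the command line. A minimal sketch, assuming the hypothetical hub and device names below and a deployment.json exported from the Update ASA Module step (requires the Azure CLI azure-iot extension):
$ az extension add --name azure-iot
# Push the updated deployment manifest, which references the new ASA job, to the edge device.
$ az iot edge set-modules --hub-name my-iot-hub --device-id my-edge-device --content ./deployment.json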

Related

How to run a flow with minimal configuration in Mosaic

I'm developing some ETL jobs using Mosaic Decisions. While running a job, it submits the job to Spark using the default configuration. This default configuration is really large, and I don't need that much for development (since I'm using a small number of records for unit testing during development).
Is there a way I can instruct Mosaic to use fewer Spark resources for my development, so that I don't unnecessarily block the cluster's resources?
Yes, it is possible to achieve that. To do so, you will have to create a new run configuration with the desired resource settings from the Manager persona (LTI Mosaic Manager), then simply execute the flow with the newly created run configuration.
Follow the steps below to create a new run configuration:
Log in to Mosaic Decisions, click on Projects in the top right corner, and then on Manager.
In Mosaic Manager, click on the Runconfig tab in the left navigation panel.
Click on Add New Configuration, provide the desired configuration, and click Save.
Go back to Mosaic Decisions and execute the desired flow with the newly created run configuration.
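For context, the values in such a run configuration map to ordinary Spark resource settings. A scaled-down development profile expressed as a plain spark-submit call would look something like the sketch below (all numbers and the script name are illustrative, not Mosaic's actual defaults):
# Illustrative values; Mosaic builds its own submit command from the run configuration.
$ spark-submit --master yarn --num-executors 2 --executor-cores 2 --executor-memory 2g --driver-memory 1g my_etl_flow.py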

Data Factory - Data Lake File Created Event Trigger fires twice

I'm developing a pipeline in Azure Data Factory V2. It has a very simple Copy activity. The pipeline has to start when a file is added to Azure Data Lake Store Gen2. To do that, I created an event trigger attached to ADLS Gen2 on 'Blob created', assigned the trigger to the pipeline, and associated the trigger data @triggerBody().fileName with a pipeline parameter.
To test this, I'm using Azure Storage Explorer to upload a file to the data lake. The problem is that the trigger in Data Factory fires twice, causing the pipeline to start twice. The first pipeline run finishes as expected, and the second one stays in processing.
Has anyone faced this issue? I tried deleting the trigger in Data Factory and creating a new one, but the result was the same with the new trigger.
I'm having the same issue myself.
When writing a file to ADLS Gen2 there is an initial CreateFile operation followed by a FlushWithClose operation, and both raise a Microsoft.Storage.BlobCreated event type.
https://learn.microsoft.com/en-us/azure/event-grid/event-schema-blob-storage
If you want to ensure that the Microsoft.Storage.BlobCreated event is triggered only when a Block Blob is completely committed, filter the event for the FlushWithClose REST API call. This API call triggers the Microsoft.Storage.BlobCreated event only after data is fully committed to a Block Blob.
https://learn.microsoft.com/en-us/azure/event-grid/how-to-filter-events
You can filter out the CreateFile operation by navigating to Event Subscriptions in the Azure portal, choosing the correct topic type (Storage Accounts), subscription, and location. Once you've done that, you should be able to see the trigger and update its filter settings. I removed CreateFile.
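The same filter can also be applied from the command line. A sketch with the Azure CLI, assuming placeholder resource names (the data.api key and StringIn operator come from the Event Grid filtering docs linked above):
# Placeholder names; keeps only events raised by the FlushWithClose call.
$ az eventgrid event-subscription update --name my-trigger-subscription --source-resource-id /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account> --advanced-filter data.api StringIn FlushWithClose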
On your trigger definition, set 'Ignore empty blobs' to Yes.
The comment from @dtape is probably what's happening underneath, and toggling this setting on effectively filters out the CreateFile portion (but not the data-written part).
This fixed the problem for me.

Using inherited process model for existing collection on Azure DevOps Server 2019

With Azure DevOps Server 2019 RC it is possible to enable the inherited process model on new collections (see the release notes). Is there any way to use the inherited process model for existing collections as well, where no customization of the process was made?
The inherited process model is currently only supported for new collections created with Azure DevOps Server 2019, not for existing collections.
See this Developer Community entry which asks for it.
I added a set of comments there on how I hacked my way from an existing XML collection with a set of projects to the inherited type:
https://developercommunity.visualstudio.com/content/idea/614232/bring-inherited-process-to-existing-projects-for-a.html
This works as long as a vanilla workflow is applied to the existing XML collection before doing the voodoo thing.
Not exactly an answer to your question, but we recently had the same task and I want to share how we handled it. We also wanted to move to the inherited model, and we did not want to do any hacking. So we decided to create a new collection on our Azure DevOps Server 2020 with the inherited model and also migrate our TFVC repository to Git.
Create the new collection. Documentation
Use git-tfs to create a local Git repository from our TFVC repository and push it (see the sketch after this list)
Use azure-devops-migration-tools to copy all work items from the old collection to the new collection
In the old collection, add the ReflectedWorkItemId field for every work item type (look here)
In the new collection, add the ReflectedWorkItemId field for every work item type by using the process editor
Pro tip: create a full backup of the new collection so you can revert to this state easily. I had multiple try-error-restores.
You can't migrate shared steps or shared parameters this way, because you can't edit these work item types in the new collection. There is a workaround.
We used the WorkItemTrackingProcessor to migrate all Epics/Features/Product Backlog Items/Bugs/Tasks/Test Cases, then ran the same processor with the mentioned workaround for Shared Steps and Shared Parameters.
This processor also migrates the iterations and area paths.
Finally, we used the TestPlansAndSuitesMigration to migrate the test plans and suites.
To speed up the migration, you can chunk the work items (for example by date or ID) and start the migration multiple times.
Our build and release pipelines + task groups were migrated manually by export and import.
We migrated the variable groups by using the API.
The teams were created manually, and we added the default area paths by hand as well.
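For the TFVC-to-Git step, a minimal git-tfs sketch could look like the following (server URLs, collection, and project names are placeholders):
# Placeholder URLs and names; requires git-tfs on the PATH.
# Clone the TFVC project, including branch history, into a local Git repository.
$ git tfs clone http://tfs.example.com:8080/tfs/OldCollection $/MyProject . --branches=all
# Push the converted history to the Git repository in the new collection.
$ git remote add origin http://devops.example.com/NewCollection/MyProject/_git/MyProject
$ git push --all origin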

Rename Google Compute Engine VM Instance

How do I rename a Google Compute Engine VM instance?
I created a new LAMP server and I'd like to rename it in the "VM Instances" dashboard.
I've tried renaming the Custom metadata, but that didn't seem to replicate to the dashboard.
I tried the solution provided by @Marius I. It works, but I lost my description, my metadata, the tags, and the permissions I had set on the old instance. I had to copy my metadata, make sure the zone for the new instance was the same as the original, and check that the pricing was the same.
I think it's best to just create a clone of your original instance; that way you don't have to manually copy/set all of this on the new instance.
As @Marius said, create a snapshot of your disk (DO NOT skip this part: you may lose all your files/configuration)
Make sure you completed step 1.
Clone your instance ("Create similar" button)
Name your cloned instance the way you want.
Make sure to select the snapshot of your disk created at step 1 (make sure you select the same type of disk as well: if your original disk was SSD, for example, you have to select SSD for the new disk too)
Make sure your IPs are set correctly
You're done :)
Another way to do this is:
snapshot the disk of the existing instance
create a new disk from that snapshot
create a new instance with that disk and give it the name you would like
It sounds time-consuming, but in reality it should take about 5 minutes.
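A minimal gcloud sketch of those three steps, assuming placeholder disk/instance names and zone:
# Placeholder names and zone throughout.
$ gcloud compute disks snapshot old-instance-disk --snapshot-names=rename-snap --zone=us-central1-a
$ gcloud compute disks create new-instance-disk --source-snapshot=rename-snap --zone=us-central1-a
$ gcloud compute instances create new-instance-name --disk=name=new-instance-disk,boot=yes --zone=us-central1-a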
You can't! Once a VM is created, you can't change the instance name.
There's now a "native" way to do this. The feature is currently in Beta and only available with gcloud and via the API. With gcloud you can run:
$ gcloud beta compute instances set-name CURRENT_NAME --zone=ZONE --new-name=NEW_NAME
Some caveats:
You'll need to shut down the VM first
The Developer Console UI won't be aware of the rename until you do a browser refresh
See the official documentation for more details.
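A concrete run, with hypothetical instance name and zone, honoring the shutdown caveat:
$ gcloud compute instances stop my-instance --zone=us-central1-a
$ gcloud beta compute instances set-name my-instance --zone=us-central1-a --new-name=my-renamed-instance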
Apart from the hacks above, it's not possible.
It has, however, been requested on UserVoice and has received 593 votes (as of 2018). Currently, it's the topmost "planned" item.
I got lost in the instructions, so I thought I'd include screenshots, because the navigation is confusing. I hope this helps you.
Stop your instance
Click on the stopped instance's name
In VM instance details, scroll down and click on the disk
Click on Create snapshot
Give it a name like snapshot-1 (or your new instance name)
Click on the Create button
Click on the newly created snapshot
Click on Create Instance
Give your instance the new name and configure the rest of the VM.
When dealing with a robust system, it's necessary to have a way to bring the system back up quickly when it goes down. This could be via custom scripts, Salt, Ansible, etc.
So, if you want to change your instance name, delete the instance, create a new one with the correct name, and run your script again :)
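As a rough sketch of that recreate approach (instance names, zone, and image are placeholders, and provisioning is assumed to live in a startup script):
# Placeholder names; assumes setup is fully captured in startup.sh.
$ gcloud compute instances delete old-name --zone=us-central1-a
$ gcloud compute instances create new-name --zone=us-central1-a --image-family=debian-12 --image-project=debian-cloud --metadata-from-file=startup-script=startup.sh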
To answer your question directly: you cannot edit the VM instance name.
However, you can create a new VM instance from your old disk, with the VM instance name that you want.
Kindly see the procedure below:
Go to the Compute Engine page
Go to the Disks page
Select the disk of the VM instance that you want to snapshot
Click the three-dot menu on the same line as your disk
Select +Create Snapshot (you will be taken to the Create snapshot page). Kindly name your snapshot (e.g. "backup")
Just click Create.
Once you have created a snapshot of your VM instance's disk, you can proceed to create your new instance from the snapshot, pointing to another region if you wish, such as us-central1, us-west1, or us-west2. Please see the procedure below:
Go to the Snapshots page
Select snapshot "backup" (you should be on the Snapshot details page)
Click Create Instance (choose the best name for your new VM instance)
Please select the region that best fits you (us-central1, us-west1, or us-west2), except us-east1.
Lastly, click Create
Machine images are now in pre-GA!
This is currently the easiest way to clone an instance without losing your instance configuration; check this comparison table.
Detailed steps:
Go to Compute Engine > Virtual Machines > Machine Images
Click on Create Machine Image
Select your current instance under Source VM instance and click Create
Once the image becomes ready, go to Machine image details and click on Create Instance
The form will be populated with your existing instance's configuration, and you'll be able to change it before creating the instance!
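The gcloud equivalent, with placeholder names (machine images may still require the beta command group depending on your SDK version):
# Placeholder names and zone.
$ gcloud compute machine-images create my-machine-image --source-instance=old-instance --source-instance-zone=us-central1-a
$ gcloud compute instances create new-instance-name --source-machine-image=my-machine-image --zone=us-central1-a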
Sorry to resurrect this thread after so long, but when I searched for an answer I kept ending up in this article... :-)
The Cloud SDK now allows renaming an instance directly, provided it's stopped. The command looks like this:
gcloud beta compute instances set-name INSTANCE_NAME --new-name=NEW_NAME [--zone=ZONE] [GCLOUD_WIDE_FLAG …]
This is not yet available in the UI.
The following worked for me:
gcloud beta compute instances set-name currentname --new-name=newname
Googler checking in. We're rolling out this feature (renaming a VM) to all users in the cloud console. Public documentation.
In the Google Cloud console:
Go to the VM instances page.
In the Name column, click the name of the VM.
Click Stop.
Click Edit.
In Basic information > Rename > VM instance name, enter a new name for the VM.
Click Save.
Click Start / Resume.
Using the gcloud Command Line Interface:
gcloud compute instances stop INSTANCE_NAME
gcloud beta compute instances set-name INSTANCE_NAME --new-name=NEW_INSTANCE_NAME
I also wanted to let you all know that we monitor these forums and use your feedback to influence our roadmap. Thank you for your engagement!
I was trying to do this in 03/2019 and saw a new option in the panel:
Click the instance link
In the top menu you will see "Create Similar"
This could work if you need the same machine without its data (it solved my case).
If you need a full copy, then you should create a snapshot and clone it.
This is now possible via the web console: stop the instance, click Edit, and change the name under Basic information.

Creating multiple branches/streams in RTC source control

What is the standard method of creating multiple streams of development for the same project in RTC source control?
Currently, to create a single stream, I create a repository workspace and its corresponding stream. I check the project in to the workspace and then deliver it to this new stream. To create a new stream of development for the project, do I need to repeat this process, or is there a better way, maybe using the command line?
No, you don't need to repeat the process.
I would recommend putting a baseline on the component you delivered in the first stream, or putting a snapshot on the first stream, which will label all the components in that stream.
Then you create a second stream, which you can:
fill component by component, specifying a baseline for each one,
or specify directly a snapshot, which will put all the components with their associated labels in that new stream.
Then you create your repository workspace and start working.
So the idea behind a new stream is to specify from what version you want to start working. Hence the baselines or snapshot put on the first stream: they will help initialize the next stream without having to re-import everything.
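Since you asked about the command line: RTC ships an SCM CLI (scm / lscm) whose create subcommands cover this flow. The outline below is an assumption-heavy sketch (repository URL, stream/snapshot names, and the exact flags are placeholders; verify them with scm help create for your RTC version):
# All names and flags here are illustrative; check `scm help` before use.
$ scm login -r https://rtc.example.com:9443/ccm -u myuser -n myrepo
# Snapshot the first stream so its component baselines are captured.
$ scm create snapshot -r myrepo --name "stream1-init" "Stream 1"
# Create the second stream seeded from that snapshot.
$ scm create stream -r myrepo --name "Stream 2" --snapshot "stream1-init"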